
Reference Architecture: Lenovo Client Virtualization with Citrix XenDesktop

Last update: 24 June 2015

Version 1.2

 Reference Architecture for Citrix XenDesktop
 Describes variety of storage models including SAN storage and hyper-converged systems
 Contains performance data and sizing recommendations for servers, storage, and networking
 Contains detailed bill of materials for servers, storage, networking, and racks

Mike Perks
Kenny Bain
Pawan Sharma



Table of Contents
1 Introduction ................................................................................................ 1

2 Architectural overview .............................................................................. 2

3 Component model ..................................................................................... 3

3.1 Citrix XenDesktop provisioning ................................................................................ 5


3.1.1 Provisioning Services.............................................................................................................. 5
3.1.2 Machine Creation Services ..................................................................................................... 6

3.2 Storage model .......................................................................................................... 7


3.3 Atlantis Computing ................................................................................................... 7
3.3.1 Atlantis hyper-converged volume ............................................................................................ 8
3.3.2 Atlantis simple hybrid volume .................................................................................................. 9
3.3.3 Atlantis simple in-memory volume ........................................................................................... 9

4 Operational model ................................................................................... 10

4.1 Operational model scenarios ................................................................................. 10


4.1.1 Enterprise operational model ................................................................................................ 11
4.1.2 SMB operational model ......................................................................................................... 11
4.1.3 Hyper-converged operational model ...................................................................................... 11

4.2 Hypervisor support ................................................................................................. 12


4.2.1 VMware ESXi ....................................................................................................................... 12
4.2.2 Citrix XenServer.................................................................................................................... 12
4.2.3 Microsoft Hyper-V ................................................................................................................. 13

4.3 Compute servers for virtual desktops ..................................................................... 14


4.3.1 Intel Xeon E5-2600 v3 processor family servers .................................................................... 14
4.3.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX......................................... 19

4.4 Compute servers for hosted shared desktops........................................................ 22


4.4.1 Intel Xeon E5-2600 v3 processor family servers .................................................................... 22

4.5 Compute servers for SMB virtual desktops ............................................................ 24


4.5.1 Intel Xeon E5-2600 v3 processor family servers with local storage ........................................ 24

4.6 Compute servers for hyper-converged systems..................................................... 26


4.6.1 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX......................................... 26

4.7 Graphics Acceleration ............................................................................................ 28


4.8 Management servers ............................................................................................. 30

4.9 Systems management ........................................................................................... 32
4.10 Shared storage ...................................................................................................... 33
4.10.1 IBM Storwize V7000 and IBM Storwize V3700 storage .......................................................... 34
4.10.2 IBM FlashSystem 840 with Atlantis USX storage acceleration ............................................... 36

4.11 Networking ............................................................................................................. 37


4.11.1 10 GbE networking ............................................................................................................... 37
4.11.2 10 GbE FCoE networking...................................................................................................... 38
4.11.3 Fibre Channel networking ..................................................................................................... 38
4.11.4 1 GbE administration networking........................................................................................... 38

4.12 Racks ..................................................................................................................... 39


4.13 Deployment models ............................................................................................... 39
4.13.1 Deployment example 1: Flex Solution with single Flex System chassis .................................. 39
4.13.2 Deployment example 2: Flex System with 4500 stateless users ............................................ 40
4.13.3 Deployment example 3: System x server with Storwize V7000 and FCoE .............................. 43
4.13.4 Deployment example 4: Hyper-converged using System x servers ........................................ 45
4.13.5 Deployment example 5: Graphics acceleration for 120 CAD users ........................................ 45

5 Appendix: Bill of materials ..................................................................... 47

5.1 BOM for enterprise compute servers ..................................................................... 47


5.2 BOM for hosted desktops....................................................................................... 52
5.3 BOM for SMB compute servers ............................................................................. 56
5.4 BOM for hyper-converged compute servers .......................................................... 57
5.5 BOM for enterprise and SMB management servers .............................................. 60
5.6 BOM for shared storage ......................................................................................... 64
5.7 BOM for networking ............................................................................................... 66
5.8 BOM for racks ........................................................................................................ 67
5.9 BOM for Flex System chassis ................................................................................ 67
5.10 BOM for OEM storage hardware ............................................................................ 68
5.11 BOM for OEM networking hardware ...................................................................... 68

Resources ....................................................................................................... 69

Document history .......................................................................................... 70

1 Introduction
The intended audience for this document is technical IT architects, system administrators, and managers who
are interested in server-based desktop virtualization and server-based computing (terminal services or
application virtualization) that uses Citrix XenDesktop. In this document, the term client virtualization is used
to refer to all of these variations. Compare this term to server virtualization, which refers to the virtualization of
server-based business logic and databases.

This document describes the reference architecture for Citrix XenDesktop 7.6 and also supports the previous
versions of Citrix XenDesktop 5.6, 7.0, and 7.1. This document should be read with the Lenovo Client
Virtualization (LCV) base reference architecture document that is available at this website:
lenovopress.com/tips1275

The business problem, business value, requirements, and hardware details are described in the LCV base
reference architecture document and are not repeated here for brevity.

This document gives an architecture overview and logical component model of Citrix XenDesktop. The
document also provides the operational model of Citrix XenDesktop by combining Lenovo® hardware
platforms such as Flex System™, System x®, NeXtScale System™, and RackSwitch networking with OEM
hardware and software such as IBM Storwize and FlashSystem storage, and Atlantis Computing software. The
operational model presents performance benchmark measurements and discussion, sizing guidance, and
some example deployment models. The last section contains detailed bill of material configurations for each
piece of hardware.

2 Architectural overview
Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with Citrix
XenDesktop. This reference architecture does not address the issues of remote access and authorization, data
traffic reduction, traffic monitoring, and general issues of multi-site deployment and network management. This
document limits the description to the components that are inside the customer’s intranet.

(Diagram: internal clients and Internet clients connect through a firewall and third-party VPN to the Web Interface and Connection Broker (Delivery Controller). Supporting components include Active Directory and DNS, the SQL database server, the Provisioning Server (PVS), Machine Creation Services, and the License Server. Hypervisor hosts run the XenDesktop pools of stateless virtual desktops, dedicated virtual desktops, and hosted desktops and apps, which are backed by shared storage.)
Figure 1: LCV reference architecture with Citrix XenDesktop

3 Component model
Figure 2 is a layered view of the LCV solution that is mapped to the Citrix XenDesktop virtualization
infrastructure.

(Diagram: client devices running the Citrix Receiver and the administrator desktop GUIs for Studio connect over HTTP/HTTPS and ICA to the Web Interface and Delivery Controller. The XenDesktop management services include PVS and MCS, the License Server, the XenDesktop SQL Server, vCenter Server, vCenter SQL Server, and hypervisor management. Hypervisor hosts run the dedicated virtual desktops, stateless virtual desktops, and hosted shared desktops and apps, each with a VM agent, together with optional accelerator VMs and local SSD storage. Shared storage, accessed over NFS and CIFS, holds the XenDesktop data store, the VM repository, difference and identity disks, user profiles, and user data files. Support services include directory, DNS, DHCP, OS licensing, and the Lenovo Thin Client Manager.)
Figure 2: Component model with Citrix XenDesktop

Citrix XenDesktop features the following main components:

Desktop Studio Desktop Studio is the main administrator GUI for Citrix XenDesktop. It is used
to configure and manage all of the main entities, including servers, desktop
pools and provisioning, policy, and licensing.

Web Interface The Web Interface provides the user interface to the XenDesktop
environment. The Web Interface brokers user authentication, enumerates the
available desktops and, upon start, delivers a .ica file to the Citrix Receiver
on the user‘s local device to start a connection. The Independent Computing
Architecture (ICA) file contains configuration information for the Citrix receiver
to communicate with the virtual desktop. Because the Web Interface is a
critical component, redundant servers must be available to provide fault
tolerance.

Delivery controller The Delivery controller is responsible for maintaining the proper level of idle
desktops to allow for instantaneous connections, monitoring the state of online
and connected desktops, and shutting down desktops as needed.

A XenDesktop farm is a larger grouping of virtual machine servers. Each


delivery controller in the XenDesktop farm acts as an XML server that is responsible
for brokering user authentication, resource enumeration, and desktop starting.
Because a failure in the XML service results in users being unable to start their
desktops, it is recommended that you configure multiple controllers per farm.

PVS and MCS Provisioning Services (PVS) is used to provision stateless desktops at a large
scale. Machine Creation Services (MCS) is used to provision dedicated or
stateless desktops in a quick and integrated manner. For more information,
see “Citrix XenDesktop provisioning” section on page 5.

License Server The Citrix License Server is responsible for managing the licenses for all
XenDesktop components. XenDesktop has a 30-day grace period that allows
the system to function normally for 30 days if the license server becomes
unavailable. This grace period offsets the complexity of otherwise building
redundancy into the license server.

XenDesktop SQL Server Each Citrix XenDesktop site requires an SQL Server database that is called
the data store, which is used to centralize farm configuration information and
transaction logs. The data store maintains all static and dynamic information
about the XenDesktop environment. Because the XenDesktop SQL server is a
critical component, redundant servers must be available to provide fault
tolerance.

vCenter Server By using a single console, vCenter Server provides centralized management
of the virtual machines (VMs) for the VMware ESXi hypervisor. VMware
vCenter can be used to perform live migration (called VMware vMotion), which
allows a running VM to be moved from one physical server to another without
downtime.

Redundancy for vCenter Server is achieved through VMware high availability


(HA). The vCenter Server also contains a licensing server for VMware ESXi.

vCenter SQL Server vCenter Server for VMware ESXi hypervisor requires an SQL database. The
vCenter SQL server might be Microsoft® Data Engine (MSDE), Oracle, or SQL
Server. Because the vCenter SQL server is a critical component, redundant
servers must be available to provide fault tolerance. Customer SQL databases
(including respective redundancy) can be used.

Client devices Citrix XenDesktop supports a broad set of devices and all major device
operating platforms, including Apple iOS, Google Android, and Google
ChromeOS. XenDesktop enables a rich, native experience on each device,
including support for gestures and multi-touch features, which customizes the
experience based on the type of device. Each client device has a Citrix
Receiver, which acts as the agent to communicate with the virtual desktop by
using the ICA/HDX protocol.

VDA Each VM needs a Citrix Virtual Desktop Agent (VDA) to capture desktop data
and send it to the Citrix Receiver in the client device. The VDA also emulates
keyboard and gestures sent from the receiver. ICA is the Citrix remote display
protocol for VDI.

Hypervisor XenDesktop has an open architecture that supports the use of several
different hypervisors, such as VMware ESXi (vSphere), Citrix XenServer, and
Microsoft Hyper-V.

Accelerator VM The optional accelerator VM in this case is Atlantis Computing. For more
information, see “Atlantis Computing” on page 7.

Shared storage Shared storage is used to store user profiles and user data files. Depending on
the provisioning model that is used, different data is stored for VM images. For
more information, see the “Storage model” section on page 7.

For more information, see the Lenovo Client Virtualization base reference architecture document that is
available at this website: lenovopress.com/tips1275.

3.1 Citrix XenDesktop provisioning


Citrix XenDesktop features the following primary provisioning models:

 Provisioning Services (PVS)


 Machine Creation Services (MCS)

3.1.1 Provisioning Services


Hosted VDI desktops can be deployed with or without Citrix PVS. The advantage of the use of PVS is that you
can stream a single desktop image to create multiple virtual desktops on one or more servers in a data center.
Figure 3 shows the sequence of operations that are run by XenDesktop to deliver a hosted VDI virtual desktop.

When the virtual disk (vDisk) master image is available from the network, the VM on a target device no longer
needs its local hard disk drive (HDD) to operate; it boots directly from the network and behaves as if it were
running from a local drive on the target device, which is why PVS is recommended for stateless virtual
desktops. PVS often is not used for dedicated virtual desktops because the write cache is not stored on shared
storage.

PVS is also used with Microsoft Roaming Profiles (MSRPs) so that the user’s profile information can be
separated out and reused. Profile data is available from shared storage.

It is a best practice to use snapshots for changes to the master VM images and also keep copies as a backup.

(Diagram: the virtual desktop sends a request for the vDisk to the Provisioning Services server, which streams the vDisk from a snapshot of the master image; desktop writes go to the local write cache.)
Figure 3: Using PVS for a stateless model

3.1.2 Machine Creation Services


Unlike PVS, MCS does not require additional servers. Instead, it uses integrated functionality that is built into the
hypervisor (VMware ESXi, Citrix XenServer, or Microsoft Hyper-V) and communicates through the respective
APIs. Each desktop has one difference disk and one identity disk (as shown in Figure 4). The difference disk is
used to capture any changes that are made to the master image. The identity disk is used to store information,
such as device name and password.

Figure 4: MCS image and difference/identity disk storage model

The following types of Image Assignment Models for MCS are available:

 Pooled-random: Desktops are assigned randomly. When a user logs off, the desktop is free for another
user. When rebooted, any changes that were made are destroyed.
 Pooled-static: Desktops are permanently assigned to a single user. When a user logs off, only that
user can use the desktop, regardless of whether the desktop is rebooted. During reboots, any changes
that are made are destroyed.
 Dedicated: Desktops are permanently assigned to a single user. When a user logs off, only that user
can use the desktop, regardless of whether the desktop is rebooted. During reboots, any changes that
are made persist across subsequent restarts.

MCS thin provisions each desktop from a master image by using built-in technology to provide each desktop
with a unique identity. Only changes that are made to the desktop use more disk space. For this reason, the
MCS dedicated assignment model is used for dedicated desktops.

3.2 Storage model


This section describes the different types of shared data stored for stateless and dedicated desktops.

Stateless and dedicated virtual desktops should have the following common shared storage items:

 The master VM image and snapshots are stored by using Network File System (NFS) or block I/O
shared storage.
 The paging file (or vSwap) is transient data that can be redirected to NFS storage. In general, it is
recommended to disable swapping, which reduces storage use (shared or local). The desktop memory
size should be chosen to match the user workload rather than depending on a smaller image and
swapping, which reduces overall desktop performance.
 User profiles (from MSRP) are stored by using Common Internet File System (CIFS).
 User data files are stored by using CIFS.

Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on
NFS or block I/O shared storage:

 Difference disks are used to store user’s changes to the base VM image. The difference disks are per
user and can become quite large for dedicated desktops.
 Identity disks are used to store the computer name and password and are small.
 Stateless desktops can use local solid-state drive (SSD) storage for the PVS write cache, which is
used to store all image writes on local SSD storage. These image writes are discarded when the VM is
shut down.

3.3 Atlantis Computing


Atlantis Computing provides a software-defined storage solution, which can deliver better performance than a
physical PC and reduce storage requirements by up to 95% in virtual desktop environments of all types. The
key is Atlantis HyperDup content-aware data services, which fundamentally changes the way VMs use storage.
This change reduces the storage footprints by up to 95% while minimizing (and in some cases, entirely
eliminating) I/O to external storage. The net effect is a reduced CAPEX and a marked increase in performance
to start, log in, start applications, search, and use virtual desktops or hosted desktops and applications. Atlantis
software uses random access memory (RAM) for write-back caching of data blocks, real-time inline

de-duplication of data, coalescing of blocks, and compression, which significantly reduces the data that is
cached and persistently stored in addition to greatly reducing network traffic.

Atlantis software works with any type of heterogeneous storage, including server RAM, direct-attached storage
(DAS), SAN, or network-attached storage (NAS). It is provided as a VMware ESXi or Citrix XenServer
compatible VM that presents the virtualized storage to the hypervisor as a native data store, which makes
deployment and integration straightforward. Atlantis Computing also provides other utilities for managing VMs
and backing up and recovering data stores.

Atlantis provides a number of volume types suitable for virtual desktops and shared desktops. Different volume
types support different application requirements and deployment models. Table 1 compares the Atlantis
volume types.

Table 1: Atlantis Volume Types


Hyper-converged: minimum of a cluster of 3 servers; performance tier: memory or local flash; capacity tier: DAS; HA: USX HA. Good balance between performance and capacity.

Simple Hybrid: minimum of 1 server; performance tier: memory or flash; capacity tier: shared storage; HA: hypervisor HA. Functionally equivalent to Atlantis ILIO Persistent VDI.

Simple All Flash: minimum of 1 server; performance tier: local flash; capacity tier: shared flash; HA: hypervisor HA. Good performance, but lower capacity.

Simple In-Memory: minimum of 1 server; performance tier: memory or flash; capacity tier: memory or flash; HA: N/A (daily backup). Functionally equivalent to Atlantis ILIO Diskless VDI.

3.3.1 Atlantis hyper-converged volume


Atlantis hyper-converged volumes are a hybrid between memory or local flash for accelerating performance
and direct access storage (DAS) for capacity and provide a good balance between performance and capacity
needed for virtual desktops.

As shown in Figure 5, hyper-converged volumes are clustered across three or more servers and have built-in
resiliency in which the volume can be migrated to other servers in the cluster in case of server failure or
entering maintenance mode. Hyper-converged volumes are supported for ESXi and XenServer.

Figure 5: Atlantis USX Hyper-converged Cluster

3.3.2 Atlantis simple hybrid volume
Atlantis simple hybrid volumes are targeted at dedicated virtual desktop environments. This volume type
provides the optimal solution for desktop virtualization customers that are using traditional or existing storage
technologies that are optimized by Atlantis software with server RAM. In this scenario, Atlantis employs
memory as a tier and uses a small amount of server RAM for all I/O processing while using the existing SAN,
NAS, or all-flash arrays storage as the primary storage. Atlantis storage optimizations increase the number of
desktops that the storage can support by up to 20 times while improving performance. Disk-backed
configurations can use various different storage types, including host-based flash memory cards, external
all-flash arrays, and conventional spinning disk arrays.

A variation of the simple hybrid volume type is the simple all-flash volume that uses fast, low-latency shared
flash storage whereby very little RAM is used and all I/O requests are sent to the flash storage after the inline
de-duplication and compression are performed.

This reference architecture concentrates on the simple hybrid volume type for dedicated desktops, stateless
desktops that use local SSDs, and hosted shared desktops and applications. To cover the widest variety of
shared storage, the simple all-flash volume type is not considered.

3.3.3 Atlantis simple in-memory volume


Atlantis simple in-memory volumes eliminate storage from stateless VDI deployments by using local server
RAM and the in-memory storage optimization technology. Server RAM is used as the primary storage for
stateless virtual desktops, which ensures that read and write I/O occurs at memory speeds and eliminates
network traffic. An option allows for in-line compression and decompression to reduce the RAM usage.
SnapClone technology is used to persist the data store in case of VM reboots, power outages, or other failures.

4 Operational model
This section describes the options for mapping the logical components of a client virtualization solution onto
hardware and software. The “Operational model scenarios” section gives an overview of the available
mappings and has pointers into the other sections for the related hardware. Each subsection contains
performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM
configurations that are described in “Appendix: Bill of materials” on page 47. The last part of this section contains some
deployment models for example customer scenarios.

4.1 Operational model scenarios


Figure 6 shows the following operational models (solutions) in Lenovo Client Virtualization: enterprise,
small-medium business (SMB), and hyper-converged.

Traditional (Enterprise, more than 600 users):
• Compute/management servers: x3550, x3650, nx360
• Hypervisor: ESXi, XenServer
• Graphics acceleration: NVIDIA GRID K1 or K2
• Shared storage: IBM Storwize V7000, IBM FlashSystem 840
• Networking: 10 GbE, 10 GbE FCoE, 8 or 16 Gb FC

Converged (Enterprise, more than 600 users):
• Flex System chassis with Flex System x240 compute/management nodes
• Hypervisor: ESXi
• Shared storage: IBM Storwize V7000, IBM FlashSystem 840
• Networking: 10 GbE, 10 GbE FCoE, 8 or 16 Gb FC

Hyper-converged (SMB and Enterprise):
• Compute/management servers: x3650
• Hypervisor: ESXi, XenServer
• Graphics acceleration: NVIDIA GRID K1 or K2
• Shared storage: not applicable
• Networking: 10 GbE

Traditional (SMB, fewer than 600 users):
• Compute/management servers: x3550, x3650, nx360
• Hypervisor: ESXi, XenServer, Hyper-V
• Graphics acceleration: NVIDIA GRID K1 or K2
• Shared storage: IBM Storwize V3700
• Networking: 10 GbE

Converged (SMB, fewer than 600 users): not recommended
Figure 6: Operational model scenarios

The vertical axis is split into two halves: greater than 600 users is termed Enterprise and less than 600 is
termed SMB. The 600 user split is not exact and provides rough guidance between Enterprise and SMB. The
last column in Figure 6 (labelled “hyper-converged”) spans both halves because a hyper-converged solution
can be deployed in a linear fashion from a small number of users (100) up to a large number of users (>4000).

The horizontal axis is split into three columns. The left-most column represents traditional rack-based systems
with top-of-rack (TOR) switches and shared storage. The middle column represents converged systems where
the compute, networking, and sometimes storage are converged into a chassis, such as the Flex System. The

right-most column represents hyper-converged systems and the software that is used in these systems. For
the purposes of this reference architecture, the traditional and converged columns are merged for enterprise
solutions; the only significant differences are the networking, form factor and capabilities of the compute
servers.

Converged systems are not generally recommended for the SMB space because the converged hardware
chassis adds too much overhead when only a few compute nodes are needed. Other compute nodes in the
converged chassis can be used for other workloads to make this hardware architecture more cost-effective.

4.1.1 Enterprise operational model


For the enterprise operational model, see the following sections for more information about each component,
its performance, and sizing guidance:

 4.2 Hypervisor support


 4.3 Compute servers for virtual desktops
 4.4 Compute servers for hosted shared desktops
 4.7 Graphics Acceleration
 4.8 Management servers
 4.9 Systems management
 4.10 Shared storage
 4.11 Networking
 4.12 Racks

To show the enterprise operational model for different sized customer environments, four different sizing
models are provided for supporting 600, 1500, 4500, and 10000 users.

4.1.2 SMB operational model


For the SMB operational model, see the following sections for more information about each component, its
performance, and sizing guidance:

 4.2 Hypervisor support


 4.5 Compute servers for SMB virtual desktops
 4.7 Graphics Acceleration
 4.8 Management servers (may not be needed)
 4.9 Systems management
 4.10 Shared storage (see section on IBM Storwize V3700)
 4.11 Networking (see section on 10 GbE and iSCSI)
 4.12 Racks (use smaller rack or no rack at all)

To show the SMB operational model for different sized customer environments, four different sizing models are
provided for supporting 75, 150, 300, and 600 users.

4.1.3 Hyper-converged operational model


For the hyper-converged operational model, see the following sections for more information about each
component, its performance, and sizing guidance:

 4.2 Hypervisor support
 4.6 Compute servers for hyper-converged systems
 4.7 Graphics Acceleration
 4.8 Management servers
 4.9 Systems management
 4.11.1 10 GbE networking
 4.12 Racks

To show the hyper-converged operational model for different sized customer environments, four different sizing
models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a
hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.

4.2 Hypervisor support


The Citrix XenDesktop reference architecture was tested with VMware ESXi 6.0, XenServer 6.5, and Microsoft
Hyper-V Server 2012 R2.

4.2.1 VMware ESXi


The VMware ESXi hypervisor is convenient because it can start from a USB flash drive and does not require
any additional local storage. Alternatively, ESXi can be booted from SAN shared storage. For the VMware ESXi
hypervisor, the number of vCenter clusters depends on the number of compute servers.

For stateless desktops, local SSDs can be used to store the base image and pooled VMs for improved
performance. Two replicas must be stored for each base image. Each stateless virtual desktop requires a
linked clone, which tends to grow over time until it is refreshed at log out. Two enterprise high-speed 400 GB
SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 800 GB SSDs might be
needed. Because of the stateless nature of the architecture, there is little added value in configuring reliable
SSDs in more redundant configurations.

To use the latest version of ESXi, obtain the image from the following website and upgrade by using vCenter:
ibm.com/systems/x/os/vmware/.

4.2.2 Citrix XenServer


For the Citrix XenServer hypervisor, it is necessary to install the OS onto drives in each compute server.

For stateless desktops, local SSDs can be used to improve performance by storing the delta disk for MCS or
the write-back cache for PVS. Each stateless virtual desktop requires a cache, which tends to grow over time
until the virtual desktop is rebooted. The size of the write-back cache depends on the environment. Two
enterprise high-speed 200 GB SSDs in a RAID 0 configuration should be sufficient for most user scenarios;
however, 400 GB (or even 800 GB) SSDs might be needed. Because of the stateless nature of the architecture,
there is little added value in configuring reliable SSDs in more redundant configurations.

The Flex System x240 compute node has two 2.5-inch drives. It is possible to both install XenServer and store
stateless virtual desktops on the same two drives by using larger capacity SSDs. The System x3550 and
System x3650 rack servers do not have this restriction and use separate HDDs for XenServer and SSDs for local
stateless virtual desktops.

For Internet Explorer 11, the software rendering option must be used for flash videos to play correctly with
Login VSI 4.1.3.

4.2.3 Microsoft Hyper-V


For the Microsoft Hyper-V hypervisor, it is necessary to install the OS onto drives in each compute server. For
anything more than a small number of compute servers, it is worth the effort of setting up an automated
installation from a USB flash drive by using Windows Assessment and Deployment Kit (ADK).

The Flex System x240 compute node has two 2.5-inch drives. It is possible to both install Hyper-V and store
stateless virtual desktops on the same two drives by using larger capacity SSDs. The System x3550 and
System x3650 rack servers do not have this restriction and use separate HDDs for Hyper-V and SSDs for local
stateless virtual desktops.

Dynamic memory provides the capability to overcommit system memory and allow more VMs at the expense of
paging. In normal circumstances, each compute server must have 20% extra memory to allow failover. When a
server goes down and the VMs are moved to other servers, there should be no noticeable performance
degradation to VMs that are already on those servers. The recommendation is to use dynamic memory, but it
should only affect the system when a compute server fails and its VMs are moved to other compute servers.

For Internet Explorer 11, the software rendering option must be used for flash videos to play correctly with
Login VSI 4.1.3.

The following configuration considerations are suggested for Hyper-V installations:

 Disable Large Send Offload (LSO) by using the Disable-NetadapterLSO command on the
Hyper-V compute server.

 Disable virtual machine queue (VMQ) on all interfaces by using the Disable-NetAdapterVmq
command on the Hyper-V compute server.

 Apply registry changes as per the Microsoft article that is found at this website:
support.microsoft.com/kb/2681638
The changes apply to Windows Server 2008 and Windows Server 2012.

 Disable VMQ and Internet Protocol Security (IPSec) task offloading flags in the Hyper-V settings for
the base VM.

 By default, storage is shared as hidden Admin shares (for example, e$) on the Hyper-V compute server,
and XenDesktop does not list Admin shares while adding the host. To make shared storage available
to XenDesktop, the volume should be shared on the Hyper-V compute server.

 Because the SCVMM library is large, it is recommended that it is accessed by using a remote share.

4.3 Compute servers for virtual desktops
This section describes stateless and dedicated virtual desktop models. Stateless desktops that allow live
migration of a VM from one physical server to another are considered the same as dedicated desktops
because they both require shared storage. In some customer environments, stateless and dedicated desktop
models might be required, which requires a hybrid implementation.

Compute servers are servers that run a hypervisor and host virtual desktops. There are several considerations
for the performance of the compute server, including the processor family and clock speed, the number of
processors, the speed and size of main memory, and local storage options.

The use of the Aero theme in Microsoft Windows® 7 or other intensive workloads has an effect on the
maximum number of virtual desktops that can be supported on each compute server. Windows 8 also requires
more processor resources than Windows 7, whereas little difference was observed between 32-bit and 64-bit
Windows 7. Although a slower processor can be used and still not exhaust the processor power, it is a good
policy to have excess capacity.

Another important consideration for compute servers is system memory. For stateless users, the typical range
of memory that is required for each desktop is 2 GB - 4 GB. For dedicated users, the range of memory for each
desktop is 2 GB - 6 GB. Designers and engineers that require graphics acceleration might need 8 GB - 16 GB
of RAM per desktop. In general, power users that require larger memory sizes also require more virtual
processors. This reference architecture standardizes on 2 GB per desktop as the minimum requirement of a
Windows 7 desktop. The virtual desktop memory should be large enough so that swapping is not needed and
vSwap can be disabled.

For more information, see “BOM for enterprise compute servers” on page 47.

4.3.1 Intel Xeon E5-2600 v3 processor family servers


This section shows the performance results and sizing guidance for Lenovo compute servers that are based on
the Intel Xeon E5-2600 V3 processors (Haswell).

ESXi 6.0 performance results

Table 2 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the Login VSI 4.1 office
worker workload with ESXi 6.0.

Table 2: ESXi 6.0 performance with office worker workload


Processor with office worker workload Hypervisor MCS Stateless MCS Dedicated

Two E5-2650 v3 2.30 GHz, 10C 105W ESXi 6.0 239 users 234 users

Two E5-2670 v3 2.30 GHz, 12C 120W ESXi 6.0 283 users 291 users

Two E5-2680 v3 2.50 GHz, 12C 120W ESXi 6.0 284 users 301 users

Two E5-2690 v3 2.60 GHz, 12C 135W ESXi 6.0 301 users 306 users

Table 3 lists the results for the Login VSI 4.1 knowledge worker workload.

Table 3: ESXi 6.0 performance with knowledge worker workload


Processor with knowledge worker workload Hypervisor MCS Stateless MCS Dedicated

Two E5-2680 v3 2.50 GHz, 12C 120W ESXi 6.0 244 users 237 users

Two E5-2690 v3 2.60 GHz, 12C 135W ESXi 6.0 252 users 246 users

Table 4 lists the results for the Login VSI 4.1 power worker workload.

Table 4: ESXi 6.0 performance with power worker workload


Processor with power worker workload Hypervisor MCS Stateless MCS Dedicated

Two E5-2680 v3 2.50 GHz, 12C 120W ESXi 6.0 203 users 202 users

These results indicate the comparative processor performance. The following conclusions can be drawn:

 The performance for stateless and dedicated virtual desktops is similar.


 The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon
E5-2690v2 processor (IvyBridge), but uses less power and is less expensive.
 The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon
E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of the lower cost.

Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series
processors are the Xeon E5-2660v3 (2.6 GHz 10C 105W) and the Xeon E5-2670v3 (2.3GHz 12C 120W)
series processors. The cost per user increases with each processor but with a corresponding increase in user
density. The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might
outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might
not provide any added value. Some users require the fastest processor and for those users the Xeon
E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an
average configuration. The Xeon E5-2680v3 processor is recommended for power workers.

Previous Reference Architectures used Login VSI 3.7 medium and heavy workloads. Table 5 gives a
comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that
Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.

Table 5: Comparison of Login VSI 3.7 and 4.1 Workloads


Processor Workload MCS Stateless MCS Dedicated

Two E5-2650 v3 2.30 GHz, 10C 105W 4.1 Office worker 239 users 234 users

Two E5-2650 v3 2.30 GHz, 10C 105W 3.7 Medium 286 users 286 users

Two E5-2690 v3 2.60 GHz, 12C 135W 4.1 Office worker 301 users 306 users

Two E5-2690 v3 2.60 GHz, 12C 135W 3.7 Medium 394 users 379 users

Two E5-2690 v3 2.60 GHz, 12C 135W 4.1 Knowledge worker 252 users 246 users

Two E5-2690 v3 2.60 GHz, 12C 135W 3.7 Heavy 348 users 319 users

Reference Architecture: Lenovo Client Virtualization with Citrix XenDesktop


15 version 1.2
Table 6 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using
the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3
processors are 25% - 40% faster than the previous generation with the equivalent processor names.

Table 6: Comparison of E5-2600 v2 and E5-2600 v3 processors


Processor Workload MCS Stateless MCS Dedicated

Two E5-2650 v2 2.60 GHz, 8C 85W 3.7 Medium 204 users 204 users

Two E5-2650 v3 2.30 GHz, 10C 105W 3.7 Medium 286 users 286 users

Two E5-2690 v2 3.0 GHz, 10C 130W 3.7 Medium 268 users 257 users

Two E5-2690 v3 2.60 GHz, 12C 135W 3.7 Medium 394 users 379 users

Two E5-2690 v2 3.0 GHz, 10C 130W 3.7 Heavy 224 users 229 users

Two E5-2690 v3 2.60 GHz, 12C 135W 3.7 Heavy 348 users 319 users

XenServer 6.5 performance results

Table 7 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the Office worker
workload with XenServer 6.5.

Table 7: XenServer 6.5 performance with Office worker workload


Processor with office worker workload Hypervisor MCS stateless MCS dedicated

Two E5-2650 v3 2.30 GHz, 10C 105W XenServer 6.5 225 users 224 users

Two E5-2680 v3 2.50 GHz, 12C 120W XenServer 6.5 274 users 278 users

Table 8 shows the results for the same comparison that uses the Knowledge worker workload.

Table 8: XenServer 6.5 performance with Knowledge worker workload


Processor with knowledge worker workload Hypervisor MCS stateless MCS dedicated

Two E5-2680 v3 2.50 GHz, 12C 120W XenServer 6.5 210 users 208 users

Table 9 lists the results for the Login VSI 4.1 power worker workload.

Table 9: XenServer 6.5 performance with power worker workload


Processor with power worker workload Hypervisor MCS Stateless MCS Dedicated

Two E5-2680 v3 2.50 GHz, 12C 120W XenServer 6.5 181 users 179 users

Hyper-V Server 2012 R2 performance results

Table 10 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the Office worker
workload with Hyper-V Server 2012 R2.

Table 10: Hyper-V Server 2012 R2 performance with Office worker workload
Processor with office worker workload Hypervisor MCS stateless MCS dedicated

Two E5-2650 v3 2.30 GHz, 10C 105W Hyper-V 270 users 272 users

Table 11 shows the results for the same comparison that uses the Knowledge worker workload.

Table 11: Hyper-V Server 2012 R2 performance with Knowledge worker workload
Processor with knowledge worker workload Hypervisor MCS stateless MCS dedicated

Two E5-2680 v3 2.50 GHz, 12C 120W Hyper-V 250 users 247 users

Table 12 lists the results for the Login VSI 4.1 power worker workload.

Table 12: Hyper-V Server 2012 R2 performance with power worker workload
Processor with power worker workload Hypervisor MCS Stateless MCS Dedicated

Two E5-2680 v3 2.50 GHz, 12C 120W Hyper-V 216 users 214 users

Compute server recommendation for Office and Knowledge workers

The default recommendation is the Xeon E5-2650v3 processor and 512 GB of system memory because this
configuration provides the best coverage for a range of users with up to 3 GB of memory. For users who need VMs
that are larger than 3 GB, Lenovo recommends the use of up to 768 GB and the Xeon E5-2680v3 processor.

Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of
89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
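
The failover arithmetic above can be illustrated with a minimal sketch (assuming the 150-user office worker baseline and the 5:1 failover ratio stated above; the group size of six servers follows from that ratio):

def users_after_failover(users_per_server, group_size=6, failed=1):
    # Users that each surviving server must host after 'failed' servers go down
    total_users = users_per_server * group_size
    return total_users / (group_size - failed)

# 150 desktops per server in normal mode with a 5:1 failover ratio
# (one spare per five active servers, that is, groups of six servers)
print(users_after_failover(150))   # 180.0 desktops per server in failover mode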

Table 13 lists the processor usage with ESXi for the recommended user counts for normal mode and failover
mode.

Table 13: Processor usage


Processor Workload Users per Server Stateless Utilization Dedicated Utilization

Two E5-2650 v3 Office worker 150 – normal mode 78% 75%

Two E5-2650 v3 Office worker 180 – failover mode 88% 87%

Two E5-2680 v3 Knowledge worker 150 – normal mode 78% 77%

Two E5-2680 v3 Knowledge worker 180 – failover mode 86% 86%

Table 14 lists the recommended number of virtual desktops per server for different VM memory. The number of
users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced
system of compute and memory.

Table 14: Recommended number of virtual desktops per server
Processor E5-2650v3 E5-2650v3 E5-2680v3

VM memory size 2 GB (default) 3 GB 4 GB

System memory 384 GB 512 GB 768 GB

Desktops per server (normal mode) 150 140 150

Desktops per server (failover mode) 180 168 180
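
As a rough sanity check of the memory sizing in Table 14 (a sketch only; the 4 GB hypervisor reserve is an assumption borrowed from the Atlantis sizing example later in this document), the failover-mode desktop count multiplied by the VM memory size should fit within the installed system memory:

import math

HYPERVISOR_RESERVE_GB = 4   # assumed reserve; adjust for the actual hypervisor overhead

def max_desktops_by_memory(system_memory_gb, vm_memory_gb):
    # Desktops that fit in memory after the hypervisor reserve is subtracted
    return math.floor((system_memory_gb - HYPERVISOR_RESERVE_GB) / vm_memory_gb)

# (system memory GB, VM memory GB, failover-mode desktops from Table 14)
for system_gb, vm_gb, failover_desktops in [(384, 2, 180), (512, 3, 168), (768, 4, 180)]:
    fits = max_desktops_by_memory(system_gb, vm_gb) >= failover_desktops
    print(system_gb, "GB with", vm_gb, "GB VMs fits", failover_desktops, "desktops:", fits)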

Table 15 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.

Table 15: Compute servers needed for different numbers of users and VM sizes
Desktop memory size (2 GB or 4 GB) 600 users 1500 users 4500 users 10000 users

Compute servers @150 users (normal) 5 10 30 68

Compute servers @180 users (failover) 4 8 25 56

Failover ratio 4:1 4:1 5:1 5:1

Desktop memory size (3 GB) 600 users 1500 users 4500 users 10000 users

Compute servers @140 users (normal) 5 11 33 72

Compute servers @168 users (failover) 4 9 27 60

Failover ratio 4:1 4.5:1 4.5:1 5:1

Compute server recommendation for Power workers

For power workers, the default recommendation is the Xeon E5-2680v3 processor and 384 GB of system
memory. For users who need VMs that are larger than 3 GB, Lenovo recommends up to 768 GB of system
memory.

Lenovo testing shows that 125 users per server is a good baseline and has an average of 79% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of
88% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 16 lists the processor usage with ESXi for the recommended user counts for normal mode and failover
mode.

Table 16: Processor usage
Processor Workload Users per Server Stateless Utilization Dedicated Utilization

Two E5-2680 v3 Power worker 125 – normal mode 78% 80%

Two E5-2680 v3 Power worker 150 – failover mode 88% 88%

Table 17 lists the recommended number of virtual desktops per server for different VM memory. The number of
power users is reduced to fit within the available memory and still maintain a reasonably balanced system of
compute and memory.

Table 17: Recommended number of virtual desktops per server


Processor E5-2680v3 E5-2680v3 E5-2680v3

VM memory size 3 GB (default) 4 GB 6 GB

System memory 384 GB 512 GB 768 GB

Desktops per server (normal mode) 105 105 105

Desktops per server (failover mode) 126 126 126

Table 18 lists the approximate number of compute servers that are needed for different numbers of power
users and VM sizes.

Table 18: Compute servers needed for different numbers of power users and VM sizes
Desktop memory size 600 users 1500 users 4500 users 10000 users

Compute servers @105 users (normal) 6 15 43 96

Compute servers @126 users (failover) 5 12 36 80

Failover ratio 5:1 4:1 5:1 5:1

4.3.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
Atlantis USX provides storage optimization by using a 100% software solution. There is a cost for processor
and memory usage while offering decreased storage usage and increased input/output operations per second
(IOPS). This section contains performance measurements for processor and memory utilization of USX simple
hybrid volumes and gives an indication of the storage usage and performance.

For persistent desktops, USX simple hybrid volumes provide acceleration to shared storage. For environments
that are not using Atlantis USX, it is recommended to use linked clones to conserve shared storage space.
However, with Atlantis USX hybrid volumes, it is recommended to use full clones for persistent desktops
because they de-duplicate more efficiently than the linked clones and can support more desktops per server.

For stateless desktops, USX simple hybrid volumes provide storage acceleration for local SSDs. USX simple
in-memory volumes are not considered for stateless desktops because they require a large amount of memory.

Table 19 lists the Login VSI performance of USX simple hybrid volumes using two E5-2680 v3 processors and
ESXi 6.0.

Table 19: ESXi 6.0 performance with USX hybrid volume
Login VSI 4.1 Workload Hypervisor MCS Stateless MCS Dedicated

Office worker ESXi 6.0 204 users 206 users

Knowledge worker ESXi 6.0 194 users 193 users

Power worker ESXi 6.0 155 users 156 users

A comparison of performance with and without the Atlantis VM shows a decrease of 20% - 30% in user density
with the Atlantis VM. This result is to be expected and it is recommended that higher-end processors (like the
E5-2680v3) are used to maximize density.

For office workers and knowledge workers, Lenovo testing shows that 125 users per server is a good baseline
and has an average of 74% usage of the processors in the server. If a server goes down, users on that server
must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150
users per server have an average of 83% usage of the processor. For power workers, Lenovo testing shows
that 100 users per server is a good baseline and has an average of 74% usage of the processors in the server.
If a server goes down, users on that server must be transferred to the remaining servers. For this degraded
failover case, Lenovo testing shows that 120 users per server have an average of 85% usage of the processor.
It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo
recommends a general failover ratio of 5:1.

Table 20 lists the processor usage with ESXi for the recommended user counts for normal mode and failover
mode.

Table 20: Processor usage


Processor Workload Users per Server Stateless Utilization Dedicated Utilization

Two E5-2680 v3 Knowledge worker 125 – normal mode 72% 75%

Two E5-2680 v3 Knowledge worker 150 – failover mode 82% 83%

Two E5-2680 v3 Power worker 100 – normal mode 74% 73%

Two E5-2680 v3 Power worker 120 – failover mode 85% 85%

It is still a best practice to separate the user folder and any other shared folders into separate storage. This
configuration leaves all of the other possible changes that might occur in a full clone to be stored in the simple
hybrid volume. This configuration is highly dependent on the environment. Testing by Atlantis Computing
suggests that 3.5 GB of unique data per persistent desktop is sufficient.

Atlantis USX uses de-duplication and compression to reduce the storage that is required for VMs. Experience
shows that an 85% - 95% reduction is possible depending on the VMs.

Assuming each VM is 30 GB, the uncompressed storage requirement for 150 full clones is 4500 GB. The
simple hybrid volume must be only 10% of this volume, which is 450 GB. To allow some room for growth and
the Atlantis VM, a 600 GB volume should be allocated.

Atlantis has some requirements on memory. The VM requires 5 GB, and 5% of the storage volume is used for
in-memory metadata. In the example of a 450 GB volume, this amount is 22.5 GB. The default size for the RAM

cache is 15%, but it is recommended to increase this size to 25% for the write heavy VDI workload. In the
example of a 450 GB volume, this size is 112.5 GB.

Assuming 4 GB for the hypervisor, 144 GB (22.5 + 112.5 + 5 + 4) of system memory should be reserved. It is
recommended that at least 384 GB of server memory is used for USX simple hybrid VDI deployments. Proof of
concept (POC) testing can help determine the actual amount of RAM.
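
The storage and memory sizing described above can be summarized in a short sketch (a minimal example that reuses the worked figures from the text: 150 full clones of a 30 GB VM, an assumed 90% de-duplication and compression reduction within the quoted 85% - 95% range, 5% in-memory metadata, a 25% RAM cache, a 5 GB Atlantis VM, and 4 GB for the hypervisor):

def usx_volume_gb(num_desktops=150, vm_size_gb=30, reduction=0.90):
    # Uncompressed requirement (4500 GB) reduced by de-duplication/compression
    return num_desktops * vm_size_gb * (1 - reduction)        # about 450 GB

def usx_reserved_memory_gb(volume_gb, metadata_pct=0.05, ram_cache_pct=0.25,
                           atlantis_vm_gb=5, hypervisor_gb=4):
    # In-memory metadata + RAM cache + Atlantis VM + hypervisor reserve
    return volume_gb * metadata_pct + volume_gb * ram_cache_pct + atlantis_vm_gb + hypervisor_gb

volume = usx_volume_gb()                    # 450 GB (allocate about 600 GB to allow for growth)
reserved = usx_reserved_memory_gb(volume)   # 22.5 + 112.5 + 5 + 4 = 144 GB
print(volume, reserved, 384 - reserved)     # 240 GB left for desktop VMs on a 384 GB server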

Table 21 lists the recommended number of virtual desktops per server for different VM memory sizes.

Table 21: Recommended number of virtual desktops per server with USX simple hybrid volumes
Processor E5-2680v3 E5-2680v3 E5-2680v3

VM memory size 2 GB (default) 3 GB 4 GB

Total system memory 384 GB 512 GB 768 GB

Reserved system memory 144 GB 144 GB 144 GB

System memory for desktop VMs 240 GB 368 GB 624 GB

Desktops per server (normal mode) 100 100 100

Desktops per server (failover mode) 120 120 120

Table 22 lists the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 384 GB system memory is used for 2 GB VMs, 512 GB system memory is used for 3 GB VMs, and
768 GB system memory is used for 4 GB VMs.

Table 22: Compute servers needed for different numbers of users with USX simple hybrid volumes
600 users 1500 users 4500 users 10000 users

Compute servers for 100 users (normal) 6 15 45 100

Compute servers for 120 users (failover) 5 13 38 84

Failover ratio 5:1 6.5:1 5:1 5:1

As a result of the use of Atlantis simple hybrid volumes, the only read operations are to fill the cache for the first
time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent
storage are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS
count is substantially reduced.

Assuming the use of a fast, low-latency shared storage device, a single VM boot can take 20 - 25 seconds to
get past the display of the logon window and get all of the other services fully loaded. This process takes this
time because boot operations are mainly read operations, although the actual boot time can vary depending on
the VM. Citrix XenDesktop boots VMs in batches of 10 at a time, which reduces IOPS for most storage
systems but is actually an inhibitor for Atlantis simple hybrid volumes. Without the use of XenDesktop, a boot of
100 VMs in a single data store is completed in 3.5 minutes; that is, only 11 times longer than the boot of a
single VM and far superior to existing storage solutions.

Login time for a single desktop varies depending on the VM image but can be extremely quick. In some cases,
the login takes less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login
every 6 seconds can be supported over a long period.

Therefore, at any one instant, there can be multiple logins underway and the main bottleneck is the processor.

4.4 Compute servers for hosted shared desktops


This section describes compute servers for hosted shared desktops that use Citrix XenDesktop 7.x. This
functionality was in the separate XenApp product, but it is now integrated into the Citrix XenDesktop product.
Hosted shared desktops are more suited to task workers who require little desktop customization.

As the name implies, multiple desktops share a single VM; however, because of this sharing, processor
resources are often exhausted before memory. Lenovo testing showed that 128 GB of memory is sufficient for
servers with two processors.

Other testing showed that the performance differences between four, six, and eight VMs are minimal; therefore,
four VMs are recommended to reduce the license costs for Windows Server 2012 R2.

For more information, see “BOM for hosted desktops” section on page 52.

4.4.1 Intel Xeon E5-2600 v3 processor family servers


Table 23 lists the processor performance results for different size workloads that use four Windows Server
2012 R2 VMs with the Xeon E5-2600v3 series processors and ESXi 6.0 hypervisor.

Table 23: ESXi 6.0 results for shared hosted desktops


Processor Hypervisor Workload Hosted Desktops

Two E5-2650 v3 2.30 GHz, 10C 105W ESXi 6.0 Office Worker 222 users

Two E5-2670 v3 2.30 GHz, 12C 120W ESXi 6.0 Office Worker 255 users

Two E5-2680 v3 2.50 GHz, 12C 120W ESXi 6.0 Office Worker 264 users

Two E5-2690 v3 2.60 GHz, 12C 135W ESXi 6.0 Office Worker 280 users

Two E5-2680 v3 2.50 GHz, 12C 120W ESXi 6.0 Knowledge Worker 231 users

Two E5-2690 v3 2.60 GHz, 12C 135W ESXi 6.0 Knowledge Worker 237 users

Table 24 lists the processor performance results for different size workloads that use four Windows Server
2012 R2 VMs with the Xeon E5-2600v3 series processors and XenServer 6.5 hypervisor.

Table 24: XenServer 6.5 results for shared hosted desktops


Processor Hypervisor Workload Hosted Desktops

Two E5-2650 v3 2.30 GHz, 10C 105W XenServer 6.5 Office Worker 225 users

Two E5-2670 v3 2.30 GHz, 12C 120W XenServer 6.5 Office Worker 243 users

Two E5-2680 v3 2.50 GHz, 12C 120W XenServer 6.5 Office Worker 262 users

Two E5-2690 v3 2.60 GHz, 12C 135W XenServer 6.5 Office Worker 271 users

Two E5-2680 v3 2.50 GHz, 12C 120W XenServer 6.5 Knowledge Worker 223 users

Two E5-2690 v3 2.60 GHz, 12C 135W XenServer 6.5 Knowledge Worker 226 users

Table 25 lists the processor performance results for different size workloads that use four Windows Server
2012 R2 VMs with the Xeon E5-2600v3 series processors and Hyper-V Server 2012 R2.

Table 25: Hyper-V Server 2012 R2 results for shared hosted desktops
Processor Hypervisor Workload Hosted Desktops

Two E5-2650 v3 2.30 GHz, 10C 105W Hyper-V Office Worker 337 users

Two E5-2680 v3 2.50 GHz, 12C 120W Hyper-V Office Worker 295 users

Two E5-2680 v3 2.50 GHz, 12C 120W Hyper-V Knowledge Worker 264 users

Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on
that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends
204 hosted desktops per server. It is important to keep 25% headroom on servers to cope with possible failover
scenarios. Lenovo recommends a general failover ratio of 5:1.
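The failover density follows from the failover ratio: with a 5:1 ratio, the surviving servers must absorb one-fifth
more load each. The following Python sketch is illustrative only and is not part of any sizing tool.

    # Sketch: with an N:1 failover ratio, the surviving N servers absorb the load of the
    # failed server, so the per-server density rises by a factor of (N + 1) / N.
    def failover_density(normal_density, failover_ratio=5):
        return normal_density * (failover_ratio + 1) / failover_ratio

    print(failover_density(170))   # 204.0 hosted desktops per server, as recommended above
    print(failover_density(125))   # 150.0, matching the virtual desktop sizing sections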

Table 26 lists the processor usage for the recommended number of users.

Table 26: Processor usage


Processor Workload Users per Server Utilization

Two E5-2650 v3 Office worker 170 – normal mode 78%

Two E5-2650 v3 Office worker 204 – failover mode 88%

Two E5-2680 v3 Knowledge worker 170 – normal mode 65%

Two E5-2680 v3 Knowledge worker 204 – failover mode 78%

Table 27 lists the number of compute servers that are needed for different numbers of users. Each compute
server has 128 GB of system memory for the four VMs.

Table 27: Compute servers needed for different numbers of users and VM sizes
600 users 1500 users 4500 users 10000 users

Compute servers for 170 users (normal) 4 10 27 59

Compute servers for 204 users (failover) 3 8 22 49

Failover ratio 3:1 4:1 4.5:1 5:1

4.5 Compute servers for SMB virtual desktops
This section presents an alternative for compute servers to be used in an SMB environment that can be more
cost-effective than the enterprise environment that is described in 4.3 Compute servers for virtual desktops and
4.4 Compute servers for hosted shared desktops. The following methods can be used to cut costs from a VDI
solution:

 Use a more restrictive XenDesktop license


 Use a hypervisor with a lower license cost
 Do not use dedicated management servers for management VMs
 Use local HDDs in compute servers and use shared storage only when necessary

The Citrix XenDesktop VDI edition has fewer features than the other XenDesktop versions and might be
sufficient for a customer's SMB environment. For more information and a comparison, see this website:
citrix.com/go/products/xendesktop/feature-matrix.html

Citrix XenServer has no additional license cost and is an alternative to other hypervisors. The performance
measurements in this section show XenServer and ESXi.

Provided that there is some kind of HA for the management server VMs, the number of compute servers that
are needed can be reduced at the cost of lower user density. There is a cross-over point in the number of users
where it makes sense to have dedicated compute servers for the management VMs. That cross-over point
varies by customer, but often it is in the range of 300 - 600 users. A good assumption is a reduction in user
density of 20% to accommodate the management VMs; for example, 125 users per compute server reduces to 100.

Shared storage is expensive, but some shared storage is needed to ensure user recovery if there is a failure;
the IBM Storwize V3700 with an iSCSI connection is recommended. Dedicated virtual desktops must always be
on shared storage so that they can be recovered if a server fails, but stateless virtual desktops can be
provisioned to HDDs on the local server. Only the user data and profile information must be on shared storage.

4.5.1 Intel Xeon E5-2600 v3 processor family servers with local storage
The performance measurements and recommendations in this section are for stateless virtual machines with
local storage on the compute server. Persistent users must use shared storage for resilience; for more
information about persistent virtual desktops, see “Compute servers for virtual desktops” on page 14.

XenServer 6.5 formats a local datastore by using the LVMoHBA file system. As a consequence, XenServer 6.5
supports only thick provisioning and not thin provisioning. This means that VMs that are created with MCS are
large and take up too much local disk space. Instead, only PVS was used to provision stateless virtual
desktops.

Table 28 shows the processor performance results for the Xeon E5-2600 v3 series of processors by using
stateless virtual machines on local HDDs with XenServer 6.5. This configuration used 12 or 16 300 GB 15k
RPM HDDs in a RAID 10 array. Measurements showed that eight HDDs do not provide sufficient IOPS or
capacity.

Table 28: Performance results for PVS stateless virtual desktops with XenServer 6.5
Processor Workload 12 HDDs 16 HDDs

Two E5-2680 v3 2.50 GHz, 12C 120W Office worker 211 users 211 users

Two E5-2680 v3 2.50 GHz, 12C 120W Knowledge worker 117 users 162 users

For the Office worker workload, 12 HDDs provide sufficient IOPS, but the storage capacity utilization with
300 GB HDDs was 96%. If the number of VMs is reduced to 125, there should be sufficient capacity for a
default 6 GB write cache per VM with some room left over for VM growth. For VMs that are larger than 30 GB, it
is recommended that 600 GB 15k RPM HDDs are used.
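A rough capacity check for the PVS write cache can be done as follows. This Python sketch is illustrative only
and ignores file-system formatting and other per-VM overhead, which is why the measured utilization of 96%
with 211 VMs is higher than this simple estimate suggests.

    # Sketch: rough fit check for PVS write caches on a local RAID 10 array.
    # Formatting and per-VM overhead are ignored, so real utilization is higher.
    def raid10_usable_gb(drives, drive_size_gb):
        return (drives // 2) * drive_size_gb        # RAID 10 mirroring halves raw capacity

    def write_cache_fits(vms, cache_gb, drives, drive_size_gb):
        return vms * cache_gb <= raid10_usable_gb(drives, drive_size_gb)

    print(write_cache_fits(125, 6, 12, 300))   # True: 750 GB needed vs 1800 GB usable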

For the Knowledge worker workload, 12 HDDs do not provide sufficient IOPS. In this case, at least 16 HDDs should be used.

Because an SMB environment can range in size from fewer than 75 users to 600 users, the following
configurations are recommended:

 For small user counts, each server can support at most 75 users by using two E5-2650v3 processors,
256 GB of memory, and 12 HDDs. The extra memory is needed for the management server VMs.
 For average user counts, each server supports at most 125 users by using two E5-2650v3 processors,
256 GB of memory, and 16 HDDs.
These configurations also need two extra HDDs for installing XenServer 6.5.
Table 29 shows the number of compute servers that are needed for different numbers of users.

Table 29: SMB Compute servers needed for different numbers of users and VM sizes
75 users 150 users 250 users 500 users

Compute servers for 75 users (normal) 2 3 N/A N/A

Compute servers for 75 users (failover) 1 2 N/A N/A

Compute servers for 100 users (normal) N/A N/A 3 5

Compute servers for 125 users (failover) N/A N/A 2 4

Includes VMs for management servers Yes Yes No No

For more information, see “BOM for SMB compute servers” on page 47.

4.6 Compute servers for hyper-converged systems
This section presents the compute servers for the Atlantis USX hyper-converged system. Additional processing
and memory often are required to support storing data locally, as well as additional SSDs or HDDs. Typically,
HDDs are used for data capacity while SSDs and memory are used to provide performance. As the price per
GB of flash memory continues to fall, there is a trend to use all SSDs to provide the overall best performance
for hyper-converged systems.

For more information, see “BOM for hyper-converged compute servers” on page 56.

4.6.1 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
Atlantis USX was tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5
servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was
installed and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across
the four servers that were running ESXi 5.5 U2.

This configuration was tested with 500 dedicated virtual desktops on four servers and then three servers to see
the difference if one server is unavailable. Table 30 lists the processor usage for the recommended number of
users.

Table 30: Processor usage for Atlantis USX


Processor Workload Servers Users per Server Utilization

Two E5-2680 v3 Knowledge worker 4 125 – normal mode 66%

Two E5-2680 v3 Knowledge worker 3 167 – failover mode 89%

From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per
server in failover mode. Lenovo recommends a general failover ratio of 5:1.

Table 31 lists the recommended number of virtual desktops per server for different VM memory sizes.

Table 31: Recommended number of virtual desktops per server for Atlantis USX
Processor E5-2680 v3 E5-2680 v3 E5-2680 v3

VM memory size 2 GB (default) 3 GB 4 GB

System memory 384 GB 512 GB 768 GB

Memory for ESXi and Atlantis USX 63 GB 63 GB 63 GB

Memory for VMs 321 GB 449 GB 705 GB

Desktops per server (normal mode) 125 125 125

Desktops per server (failover mode) 150 150 150

Table 32 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.

Table 32: Compute servers needed for different numbers of users for Atlantis USX

300 users 600 users 1500 users 3000 users

Compute servers for 125 users (normal) 4 5 12 24

Compute servers for 150 users (failover) 3 4 10 20

Failover ratio 3:1 4:1 5:1 5:1

An important part of a hyper-converged system is its resiliency to failures when a compute server is
unavailable. To test this, Login VSI was run and one compute server was powered off during the steady state
phase. During this phase, 114 - 120 VMs were migrated from the “failed” server to the other three servers, with
each server gaining 38 - 40 VMs.

Figure 7 shows the processor usage for the four servers during the steady state phase when one of the servers
is powered off. The processor spike for the three remaining servers is noticeable.

Figure 7: Atlantis USX processor usage: server power off

There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet
the hyper-converged system can recover quite quickly. However, this situation is best avoided as it is important
to build in redundancy at multiple levels for all mission critical systems.

4.7 Graphics Acceleration
The XenServer 6.5 hypervisor supports the following options for graphics acceleration:

 Dedicated GPU with one GPU per user, which is called pass-through mode.
 GPU hardware virtualization (vGPU) that partitions each GPU for 1 to 8 users.

The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using
the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers
supports up to two GRID adapters. No significant performance differences were found between these two
servers when they were used for graphics acceleration and the results apply to both.

Because pass-through mode offers a low user density (eight users for GRID K1 and four for GRID K2), it is
recommended that this mode is used only for power users, designers, engineers, or scientists who require
powerful graphics acceleration.

Lenovo recommends that a high-powered CPU, such as the E5-2680v3, is used for pass-through mode and
vGPU because accelerated graphics tends to put an extra load on the processor. For pass-through mode, with
only four or eight users per server, 128 GB of server memory should be sufficient, even for high-end GRID K2
users who might need 16 GB or even 24 GB per VM. See also NVIDIA's release notes on vGPU for Citrix
XenServer 6.0.

The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image
quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or
knowledge workers usually have less intense graphics workloads and can achieve higher frame rates. All tests
were done by using the default H.264 compatibility display mode for the clients. This configuration offers the
best compromise between data size (compression) and performance.

Table 33 lists the results of the Heaven benchmark as frames per second (FPS) that are available to each user
with the GRID K1 adapter by using pass-through mode with DirectX 11.

Table 33: Performance of GRID K1 pass-through mode by using DirectX 11


Quality Tessellation Anti-Aliasing Resolution K1

High Normal 0 1024x768 14.3


High Normal 0 1280x768 12.3
High Normal 0 1280x1024 11.4
High Normal 0 1680x1050 7.8
High Normal 0 1920x1200 5.5

Table 34 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K2
adapter by using pass-through mode with DirectX 11.

Table 34: Performance of GRID K2 pass-through mode by using DirectX 11
Quality Tessellation Anti-Aliasing Resolution K2

High Normal 0 1680x1050 52.6

Ultra Extreme 8 1680x1050 29.2


Ultra Extreme 8 1920x1080 25.9
Ultra Extreme 8 1920x1200 23.9
Ultra Extreme 8 2560x1600 14.8

Table 35 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K1
adapter by using vGPU mode with DirectX 11. The K180Q profile has similar performance to the K1
pass-through mode.

Table 35: Performance of GRID K1 vGPU modes by using DirectX 11


Quality Tessellation Anti-Aliasing Resolution K180Q K160Q K140Q
High Normal 0 1024x768 14.3 7.8 4.4
High Normal 0 1280x768 12.3 6.7 3.7
High Normal 0 1280x1024 11.4 5.5 3.1
High Normal 0 1680x1050 7.8 4.3 N/A
High Normal 0 1920x1200 5.5 3.5 N/A

Table 36 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K2
adapter by using vGPU mode with DirectX 11. The K280Q profile has similar performance to the K2
pass-through mode.

Table 36: Performance of GRID K2 vGPU modes by using DirectX 11


Quality Tessellation Anti-Aliasing Resolution K280Q K260Q K240Q
High Normal 0 1680x1050 52.6 26.3 13.3
High Normal 0 1920x1200 N/A 20.3 10.0
Ultra Extreme 8 1680x1050 29.2 N/A N/A

Ultra Extreme 8 1920x1080 25.9 N/A N/A

Ultra Extreme 8 1920x1200 23.9 11.5 N/A

Ultra Extreme 8 2560x1600 14.8 N/A N/A

The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the high quality,
tessellation, and anti-aliasing options. This result is expected because of the relative performance
characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution
increases.

Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is
done in the customer environment to verify the performance for the required user workloads.

For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for Lenovo System x3650
M5 and NeXtScale nx360 M5 servers, see the following corresponding BOMs:

 “BOM for enterprise compute servers” on page 47


 “BOM for SMB compute servers” on page 56
 “BOM for hyper-converged compute servers” on page 57

4.8 Management servers


Management servers should have the same hardware specification as compute servers so that they can be
used interchangeably in a worst-case scenario. The Citrix XenDesktop management servers also use the
same hypervisor, but have management VMs instead of user desktops. Table 37 lists the VM requirements and
performance characteristics of each management service for Citrix XenDesktop.

Table 37: Characteristics of XenDesktop and ESXi management services

Management service VM Virtual processors System memory Storage Windows OS HA needed Performance characteristic

Delivery controller 4 8 GB 60 GB 2012 R2 Yes 5000 user connections

Web Interface 4 4 GB 60 GB 2012 R2 Yes 30,000 connections per hour

Citrix licensing server 2 4 GB 60 GB 2012 R2 No 170 licenses per second

XenDesktop SQL server 2 8 GB 60 GB 2012 R2 Yes 5000 users

PVS servers 4 32 GB (depends on number of images) 60 GB 2012 R2 Yes Up to 1000 desktops; memory should be a minimum of 2 GB plus 1.5 GB per image served

vCenter server 8 16 GB 60 GB 2012 R2 No Up to 2000 desktops

vCenter SQL server 4 8 GB 200 GB 2012 R2 Yes Double the virtual processors and memory for more than 2500 users

PVS servers often are run natively on Windows servers. The testing showed that they can run well inside a VM,
if it is sized per Table 37. The disk space for PVS servers is related to the number of provisioned images.

Table 38 lists the number of management VMs for each user count, based on the high availability and
performance characteristics. The number of vCenter servers is half of the number of vCenter clusters because
each vCenter server can handle two clusters of up to 1000 desktops.

Table 38: Management VMs needed
XenDesktop management service VM 600 users 1500 users 4500 users 10000 users

Delivery Controllers 2 (1+1) 2 (1+1) 2 (1+1) 2(1+1)


Includes Citrix Licensing server Y N N N
Includes Web server Y N N N

Web Interface N/A 2 (1+1) 2 (1+1) 2 (1+1)

Citrix licensing servers N/A 1 1 1

XenDesktop SQL servers 2 (1+1) 2 (1+1) 2 (1+1) 2 (1+1)

PVS servers for stateless case only 2 (1+1) 4 (2+2) 8 (6+2) 14 (10+4)

ESXi management service VM 600 users 1500 users 4500 users 10000 users

vCenter servers 1 1 3 7

vCenter SQL servers 2 (1+1) 2 (1+1) 2 (1+1) 2 (1+1)

Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough
capacity in the management servers for all of these VMs. Table 39 lists an example mapping of the
management VMs to the four physical management servers for 4500 users.

Table 39: Management server VM mapping (4500 users)


Management service for 4500 stateless users Management server 1 Management server 2 Management server 3 Management server 4

vCenter servers (3) 1 1 1

vCenter database (2) 1 1

XenDesktop database (2) 1 1

Delivery controller (2) 1 1

Web Interface (2) 1 1

License server (1) 1

PVS servers for stateless desktops (8) 2 2 2 2

It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol
(DHCP), domain name server (DNS), and Microsoft licensing servers exist in the customer environment.

For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O
servers that support CIFS or NFS shares and translate file requests to the block storage system. For high
availability, two or more Windows storage servers are clustered.

Based on the number and type of desktops, Table 40 lists the recommended number of physical management
servers. In all cases, there is redundancy in the physical management servers and the management VMs.

Table 40: Management servers needed
Management servers 600 users 1500 users 4500 users 10000 users

Stateless desktop model 2 2 4 7

Dedicated desktop model 2 2 2 4

Windows Storage Server 2012 2 2 3 4

For more information, see “BOM for enterprise and SMB management servers” on page 60.

4.9 Systems management


Lenovo XClarity™ Administrator is a centralized resource management solution that reduces complexity,
speeds up response, and enhances the availability of Lenovo® server systems and solutions.

The Lenovo XClarity Administrator provides agent-free hardware management for Lenovo’s System x® rack
servers and Flex System™ compute nodes and components, including the Chassis Management Module
(CMM) and Flex System I/O modules. Figure 8 shows the Lenovo XClarity administrator interface, where Flex
System components and rack servers are managed and are seen on the dashboard. Lenovo XClarity
Administrator is a virtual appliance that is quickly imported into a virtualized server environment.

Figure 8: XClarity Administrator interface

4.10 Shared storage
VDI workloads such as virtual desktop provisioning, VM loading across the network, and access to user
profiles and data files place huge demands on network shared storage.

Experimentation with VDI infrastructures shows that the input/output operations per second (IOPS) performance
takes precedence over storage capacity. This means that many slower-speed drives are needed to
match the IOPS performance of fewer higher-speed drives. Even with the fastest HDDs available today (15k rpm),
there can still be excess capacity in the storage system because extra spindles are needed to provide the
IOPS performance. From experience, this extra storage is more than sufficient for the other types of data such
as SQL databases and transaction logs.

The large rate of IOPS, and therefore, large number of drives needed for dedicated virtual desktops can be
ameliorated to some extent by caching data in flash memory or SSD drives. The storage configurations are
based on the peak performance requirement, which usually occurs during the so-called “logon storm.” This is
when all workers at a company arrive in the morning and try to start their virtual desktops, all at the same time.

It is always recommended that user data files (shared folders) and user profile data are stored separately from
the user image. By default, this has to be done for stateless virtual desktops and should also be done for
dedicated virtual desktops. It is assumed that 100% of the users at peak load times require concurrent access
to user data and profiles.

Stateless virtual desktops are provisioned from shared storage using PVS. The PVS write cache is maintained
on a local SSD. Table 41 summarizes the peak IOPS and disk space requirements for stateless virtual
desktops on a per-user basis.

Table 41: Stateless virtual desktop shared storage performance requirements


Stateless virtual desktops Protocol Size IOPS Write %

vSwap (recommended to be disabled) NFS or Block 0 0 0

User data files CIFS/NFS 5 GB 1 75%

User profile (through MSRP) CIFS 100 MB 0.8 75%

Table 42 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual
desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of
disk space. Stateless users that require mobility and have no local SSDs also fall into this category. The last
three rows of Table 42 are the same as Table 41 for stateless desktops.

Table 42: Dedicated or shared stateless virtual desktop shared storage performance requirements

Dedicated virtual desktops Protocol Size IOPS Write %

Master image NFS or Block 30 GB

Difference disks and user “AppData” folder NFS or Block 10 GB 18 85%

vSwap (recommended to be disabled) NFS or Block 0 0 0

User files CIFS/NFS 5 GB 1 75%

User profile (through MSRP) CIFS 100 MB 0.8 75%

The sizes and IOPS for user data files and user profiles that are listed in Table 41 and Table 42 can vary
depending on the customer environment. For example, power users might require 10 GB and five IOPS for
user files because of the applications they use. It is assumed that 100% of the users at peak load times require
concurrent access to user data files and profiles.
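Because 100% concurrency is assumed at peak, the per-user values in Table 41 and Table 42 can simply be
summed across the user population to estimate the peak load on the storage controllers. The following Python
sketch is illustrative only; the example population matches deployment example 3 later in this document.

    # Sketch: aggregate peak IOPS for shared storage from the per-user values in
    # Tables 41 and 42 (1 IOPS user data + 0.8 IOPS profile; dedicated desktops add
    # 18 IOPS for the difference disk and AppData folder). 100% concurrency is assumed.
    def peak_storage_iops(stateless_users, dedicated_users):
        stateless_per_user = 1.0 + 0.8          # user data files + user profile
        dedicated_per_user = 18.0 + 1.0 + 0.8   # difference disks/AppData + data + profile
        return stateless_users * stateless_per_user + dedicated_users * dedicated_per_user

    print(peak_storage_iops(2700, 300))   # 2700 stateless plus 300 dedicated users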

Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for
dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any
storage controller configuration.

The storage configurations that are presented in this section include conservative assumptions about the VM
size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most
demanding user scenarios.

This reference architecture describes the following different shared storage solutions:

 Block I/O to IBM Storwize® V7000 / Storwize V3700 storage using Fibre Channel (FC)
 Block I/O to IBM Storwize V7000 / Storwize V3700 storage using FC over Ethernet (FCoE)
 Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Internet Small Computer System
Interface (iSCSI)
 Block I/O to IBM FlashSystem 840 with Atlantis USX storage acceleration

4.10.1 IBM Storwize V7000 and IBM Storwize V3700 storage


The IBM Storwize V7000 generation 2 storage system supports up to 504 drives by using up to 20 expansion
enclosures. Up to four controller enclosures can be clustered for a maximum of 1056 drives (44 expansion
enclosures). The Storwize V7000 generation 2 storage system also has a 64 GB cache, which is expandable to
128 GB.

The IBM Storwize V3700 storage system is somewhat similar to the Storwize V7000 storage, but is restricted
to a maximum of five expansion enclosures for a total of 120 drives. The maximum size of the cache for the
Storwize V3700 is 8 GB.

The Storwize cache acts as a read cache and a write-through cache and is useful to cache commonly used
data for VDI workloads. The read and write cache are managed separately. The write cache is divided up
across the storage pools that are defined for the Storwize storage system.

In addition, Storwize storage offers the IBM Easy Tier® function, which allows commonly used data blocks to
be transparently stored on SSDs. There is a noticeable improvement in performance when Easy Tier is used,
which tends to tail off as SSDs are added. It is recommended that approximately 10% of the storage space is
used for SSDs to give the best balance between price and performance.

The tiered storage support of Storwize storage also allows a mixture of different disk drives. Slower drives can
be used for shared folders and profiles; faster drives and SSDs can be used for persistent virtual desktops and
desktop images.

To support file I/O (CIFS and NFS) into Storwize storage, Windows storage servers must be added, as
described in “Management servers” on page 30.

The fastest HDDs that are available for Storwize storage are 15k rpm drives in a RAID 10 array. Storage
performance can be significantly improved with the use of Easy Tier. If this performance is insufficient, SSDs or
alternatives (such as a flash storage system) are required.

For this reference architecture, it is assumed that each user has 5 GB for shared folders and profile data and
uses an average of 2 IOPS to access those files. Investigation into the performance shows that 600 GB
10k rpm drives in a RAID 10 array give the best ratio of input/output operation performance to disk space. If
users need more than 5 GB for shared folders and profile data then 900 GB (or even 1.2 TB), 10k rpm drives
can be used instead of 600 GB. If less capacity is needed, the 300 GB 15k rpm drives can be used for shared
folders and profile data.

Persistent virtual desktops require both a high number of IOPS and a large amount of disk space for the linked
clones. The linked clones can grow in size over time as well. For persistent desktops, 300 GB 15k rpm drives
configured as RAID 10 were not sufficient and extra drives were required to achieve the necessary
performance. Therefore, it is recommended to use a mixture of both speeds of drives for persistent desktops
and shared folders and profile data.

Depending on the number of master images, one or more RAID 1 arrays of SSDs can be used to store the VM
master images. This configuration helps with the performance of provisioning virtual desktops; that is, a “boot storm”.
Each master image requires at least double its space. The actual number of SSDs in the array depends on the
number and size of images. In general, more users require more images. Table 43 lists the SSD configurations
for different numbers of users.

Table 43: VM images and SSDs


600 users 1500 users 4500 users 10000 users

Image size 30 GB 30 GB 30 GB 30 GB

Number of master images 2 4 8 16

Required disk space (doubled) 120 GB 240 GB 480 GB 960 GB

400 GB SSD configuration RAID 1 (2) RAID 1 (2) Two RAID 1 arrays (4) Four RAID 1 arrays (8)
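The disk space figures in Table 43 follow from doubling the space for each master image. The following Python
sketch is illustrative only.

    # Sketch: SSD space for master images, doubling each image as described above.
    def master_image_space_gb(num_images, image_size_gb=30):
        return num_images * image_size_gb * 2

    for images in (2, 4, 8, 16):
        print(images, master_image_space_gb(images))   # 120, 240, 480, 960 GB (Table 43)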

Table 44 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one
Storwize control enclosure is needed for a range of user counts. Based on the assumptions in Table 44, the
IBM Storwize V3700 storage system can support up to 7000 users only.

Table 44: Storwize storage configuration for stateless users
Stateless storage 600 users 1500 users 4500 users 10000 users

400 GB SSDs in RAID 1 for master images 2 2 4 8

Hot spare SSDs 2 2 4 4

600 GB 10k rpm in RAID 10 for users 12 28 80 168

Hot spare 600 GB drives 2 2 4 12

Storwize control enclosures 1 1 1 1

Storwize expansion enclosures 0 1 3 7

Table 45 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless
user counts. The top four rows of Table 45 are the same as for stateless desktops. Lenovo recommends
clustering the IBM Storwize V7000 storage system and the use of a separate control enclosure for every 2500
or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across
all of the controllers. Based on the assumptions in Table 45, the IBM Storwize V3700 storage system can
support up to 1200 users.

Table 45: Storwize storage configuration for dedicated or shared stateless users
Dedicated or shared stateless storage 600 users 1500 users 4500 users 10000 users

400 GB SSDs in RAID 1 for master images 2 2 4 8

Hot spare SSDs 2 2 4 8

600 GB 10k rpm in RAID 10 for users 12 28 80 168

Hot spare 600 GB 10k rpm drives 2 2 4 12

300 GB 15k rpm in RAID 10 for persistent 40 104 304 672


desktops

Hot spare 300 GB 15k rpm drives 2 4 4 12

400 GB SSDs for Easy Tier 4 12 32 64

Storwize control enclosures 1 1 2 4

Storwize expansion enclosures 2 6 16 (2 x 8) 36 (4 x 9)

Refer to the “BOM for shared storage” on page 64 for more details.

4.10.2 IBM FlashSystem 840 with Atlantis USX storage acceleration


The IBM FlashSystem 840 storage has low latencies and supports high IOPS. It can be used on its own for VDI
solutions; however, it is not cost-effective. If the Atlantis simple hybrid VM is used to provide storage
optimization (capacity reduction and IOPS reduction), it becomes a much more cost-effective solution.

Each FlashSystem 840 storage device supports up to 20 TB or 40 TB of storage, depending on the size of the
flash modules. To maintain the integrity and redundancy of the storage, it is recommended to use RAID 5. It is
not recommended to use this device for small user counts because it is not cost-efficient.

Persistent virtual desktops require the most storage space and are the best candidate for this storage device.
The device also can be used for user folders, snap clones, and image management, although these items can
be placed on other slower shared storage.

The amount of required storage for persistent virtual desktops varies and depends on the environment. Table
46 is provided for guidance purposes only.

Table 46: FlashSystem 840 storage configuration for dedicated users with Atlantis USX
Dedicated storage 1000 users 3000 users 5000 users 10000 users

IBM FlashSystem 840 storage 1 1 1 1

2 TB flash module 4 8 12 0

4 TB flash module 0 0 0 12

Capacity 4 TB 12 TB 20 TB 40 TB
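The capacities that are listed in Table 46 are consistent with a RAID 5 layout in which one flash module is used
for parity and one is kept as a spare, so usable capacity is approximately (modules - 2) x module size. The
following Python sketch reflects that assumption and is illustrative only.

    # Sketch: usable FlashSystem 840 capacity, assuming a RAID 5 layout that consumes
    # one flash module for parity and one as a spare (consistent with Table 46).
    def flashsystem_usable_tb(modules, module_tb):
        return max(modules - 2, 0) * module_tb

    print(flashsystem_usable_tb(4, 2))    # 4 TB
    print(flashsystem_usable_tb(8, 2))    # 12 TB
    print(flashsystem_usable_tb(12, 2))   # 20 TB
    print(flashsystem_usable_tb(12, 4))   # 40 TB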

Refer to the “BOM for OEM storage hardware” on page 68 for more details.

4.11 Networking
The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the
shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN that is based on 8 or
16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connection is needed. Other types of storage can be network
attached by using 1 Gb or 10 Gb Ethernet.

Also, there are user and management virtual local area networks (VLANs) that require 1 Gb or 10 Gb Ethernet,
as described in the Lenovo Client Virtualization reference architecture, which is available at this
website: lenovopress.com/tips1275.

Automated failover and redundancy of the entire network infrastructure and shared storage is important. This
failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths
between the compute servers, management servers, and shared storage.

If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR
switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.

For more information, see “BOM for networking” on page 66.

4.11.1 10 GbE networking


For 10 GbE networking and the use of CIFS, NFS, or iSCSI, the Lenovo RackSwitch™ G8124E and G8264R
TOR switches are recommended because they support VLANs by using Virtual Fabric. Redundancy and
automated failover are available by using link aggregation, such as Link Aggregation Control Protocol (LACP),
and two of everything. For the Flex System chassis, pairs of the EN4093R switch should be used and
connected to a G8124 or G8264 TOR switch. The TOR 10 GbE switches are needed for multiple Flex System
chassis or for external connectivity.

iSCSI also requires converged network adapters (CNAs) that have the LOM extension.
Table 47 lists the TOR 10 GbE network switches for each user size.

Table 47: TOR 10 GbE network switches needed


10 GbE TOR network switch 600 users 1500 users 4500 users 10000 users

G8124E – 24-port switch 2 2 0 0

G8264R – 64-port switch 0 0 2 2

4.11.2 10 GbE FCoE networking


FCoE on a 10 GbE network requires converged networking switches, such as pairs of the CN4093 switch for
the Flex System chassis and the G8264CS TOR converged switch. The TOR converged switches are needed
for multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. FCoE also
requires CNAs that have the LOM extension. Table 48 summarizes the TOR converged network switches for
each user size.

Table 48: TOR 10 GbE converged network switches needed


10 GbE TOR network switch 600 users 1500 users 4500 users 10000 users

G8264CS – 64-port switch (including up to 12 Fibre Channel ports) 2 2 2 2

4.11.3 Fibre Channel networking


Fibre Channel (FC) networking for shared storage requires switches. Pairs of the FC3171 8 Gbps FC or
FC5022 16 Gbps FC SAN switch are needed for the Flex System chassis. For top of rack, the Lenovo 3873 16
Gbps FC switches should be used. The TOR SAN switches are needed for multiple Flex System chassis or for
clustering of multiple IBM V7000 Storwize storage systems. Table 49 lists the TOR FC SAN network switches
for each user size.

Table 49: TOR FC network switches needed


Fibre Channel TOR network switch 600 users 1500 users 4500 users 10000 users

Lenovo 3873 AR2 – 24-port switch 2 2

Lenovo 3873 BR1 – 48-port switch 2 2

4.11.4 1 GbE administration networking


A 1 GbE network should be used to administer all of the other devices in the system. Separate 1 GbE switches
are used for the IT administration network. Lenovo recommends that redundancy is also built into this network
at the switch level. At minimum, a second switch should be available in case a switch goes down. Table 50 lists
the number of 1 GbE switches for each user size.

Table 50: TOR 1 GbE network switches needed


1 GbE TOR network switch 600 users 1500 users 4500 users 10000 users

G8052 – 48 port switch 2 2 2 2

Table 51 shows the number of 1 GbE connections that are needed for the administration network and switches
for each type of device. The total number of connections is the sum, across all device types, of the number of
devices multiplied by the connections per device.

Table 51: 1 GbE connections needed


Device Number of 1 GbE connections for administration

System x rack server 1

Flex System Enterprise Chassis CMM 2

Flex System Enterprise Chassis switches 1 per switch (optional)

IBM Storwize V7000 storage controller 4

TOR switches 1
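The following Python sketch is illustrative only and shows how the connection counts in Table 51 are totaled;
the example inventory is hypothetical.

    # Sketch: total 1 GbE administration connections, summing device count times the
    # per-device connections from Table 51. The inventory below is only an example.
    per_device_connections = {
        "System x rack server": 1,
        "Flex System Enterprise Chassis CMM": 2,
        "Flex System Enterprise Chassis switch": 1,   # optional
        "IBM Storwize V7000 storage controller": 4,
        "TOR switch": 1,
    }

    example_inventory = {   # hypothetical small deployment, for illustration only
        "System x rack server": 8,
        "IBM Storwize V7000 storage controller": 1,
        "TOR switch": 4,
    }

    total = sum(count * per_device_connections[device]
                for device, count in example_inventory.items())
    print(total)   # 8*1 + 1*4 + 4*1 = 16 connections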

4.12 Racks
The number of racks and chassis for Flex System compute nodes depends upon the precise configuration that
is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex
System Enterprise Chassis (if applicable). The number of racks for System x servers is also dependent on the
total height of all of the components. For more information, see the “BOM for racks” section on page 67.

4.13 Deployment models


This section describes the following examples of different deployment models:

 Flex Solution with single Flex System chassis
 Flex System with 4500 stateless users
 System x server with Storwize V7000 and FCoE
 Hyper-converged using System x servers
 Graphics acceleration for 120 CAD users

4.13.1 Deployment example 1: Flex Solution with single Flex System chassis
As shown in Table 52, this example is for 1250 stateless users that are using a single Flex System chassis.
There are 10 compute nodes supporting 125 users in normal mode and 156 users in the failover case of up to
two nodes not being available. The IBM Storwize V7000 storage is connected by using FC directly to the Flex
System chassis.

Table 52: Deployment configuration for 1250 stateless users with Flex System x240 Compute Nodes

Stateless virtual desktop 1250 users

x240 compute servers 10

x240 management servers 2

x240 Windows storage servers (WSS) 2

Total 300 GB 15k rpm drives 40

Hot spare 300 GB 15k drives 2

Total 400 GB SSDs 6

Storwize V7000 controller enclosure 1

Storwize V7000 expansion enclosures 1

Flex System EN4093R switches 2

Flex System FC5022 switches 2

Flex System Enterprise Chassis 1

Total height 14U

Number of Flex System racks 1

The accompanying rack diagram shows the single Flex System chassis populated with the 10 compute nodes,
the two management nodes, and the two WSS nodes, together with the IBM Storwize V7000 controller and
expansion enclosures.

4.13.2 Deployment example 2: Flex System with 4500 stateless users


As shown in Table 53, this example is for 4500 stateless users who are using a Flex System based chassis
with each of the 36 compute nodes supporting 125 users in normal mode and 150 in the failover case.

Table 53: Deployment configuration for 4500 stateless users


Stateless virtual desktop 4500 users
Compute servers 36
Management servers 4
V7000 Storwize controller 1
V7000 Storwize expansion 3
Flex System EN4093R switches 6
Flex System FC3171 switches 6
Flex System Enterprise Chassis 3
10 GbE network switches 2 x G8264R
1 GbE network switches 2 x G8052
SAN network switches 2 x SAN24B-5
Total height 44U
Number of racks 2

Figure 9 shows the deployment diagram for this configuration. The first rack contains the compute and
management servers and the second rack contains the shared storage.

Figure 9 shows the placement of the management VMs on the four management servers M1 - M4 (the vCenter
servers, vCenter and XenDesktop SQL servers, delivery controllers, web servers, the Citrix license server, and
pairs of PVS servers), the TOR switches, and the compute servers, each of which hosts 125 user VMs.

Figure 9: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage

Figure 10 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System
Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle
and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for the
purpose of clarity.

Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack
level by using two G8264R TOR switches. Redundant SAN networking is also used with two FC3171 switches
and two top of rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected
to each of the SAN24B-5 switches.

Figure 10: Network diagram for 4500 stateless users using Storwize V7000 shared storage

4.13.3 Deployment example 3: System x server with Storwize V7000 and FCoE
This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are
stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the
average VM size is 2.1 GB.

Assuming 125 users per server in the normal case and 150 users in the failover case, then 3000 users need 24
compute servers. A maximum of four compute servers can be down for a 5:1 failover ratio. Each compute
server needs at least 315 GB of RAM (150 x 2.1), not including the hypervisor. This figure is rounded up to
384 GB, which should be more than enough and can cope with up to 125 users, all with 3 GB VMs.
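The per-server memory sizing for this example can be reproduced with the following illustrative Python sketch.

    # Sketch: per-server memory for deployment example 3. The 2.1 GB average VM size and
    # 150-user failover density come from the text above; the hypervisor is excluded.
    stateless_users, dedicated_users = 2700, 300
    avg_vm_gb = (stateless_users * 2 + dedicated_users * 3) / (stateless_users + dedicated_users)
    failover_users_per_server = 150
    print(avg_vm_gb)                                   # 2.1 GB
    print(failover_users_per_server * avg_vm_gb)       # 315 GB, rounded up to 384 GB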

Each compute server is a System x3550 server with two Xeon E5-2650v2 series processors, 24 16 GB DIMMs
of 1866 MHz RAM (384 GB), an embedded dual port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI
(A2TE). For interchangeability between the servers, all of them have a RAID controller with 1 GB flash upgrade
and two S3700 400 GB MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.

In addition, there are three management servers. For interchangeability in case of server failure, these extra
servers are configured in the same way as the compute servers. All the servers have a USB key with ESXi.

There also are two Windows storage servers that are configured differently with HDDs in RAID 1 array for the
operating system. Some spare, preloaded drives are kept to quickly deploy a replacement Windows storage
server if one should fail. The replacement server can be one of the compute servers. The idea is to quickly get
a replacement online if the second one fails. Although there is a low likelihood of this situation occurring, it
reduces the window of failure for the two critical Windows storage servers.

All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR
RackSwitch G8264CS 10GbE converged switches. All 10 GbE and FC connections are configured to be fully
redundant. As an alternative, iSCSI with G8264 10GbE switches can be used.

For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users
require space for user folders and profile data. Stateless users need space for master images and persistent
users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which
substantially decreases the amount of shared storage. For stateless servers with SSDs, a server must be
taken offline and only have maintenance performed on it after all of the users are logged off rather than being
able to use vMotion. If a server crashes, this issue is immaterial.

It is estimated that this configuration requires the following IBM Storwize V7000 drives:

 Two 400 GB SSD RAID 1 for master images

 Thirty 300 GB 15K drives RAID 10 for persistent images

 Four 400 GB SSD Easy Tier for persistent images

 Sixty 600 GB 10K drives RAID 10 for user folders

This configuration requires 96 drives, which fit into one Storwize V7000 control enclosure and three expansion
enclosures.

Figure 11 shows the deployment configuration for this example in a single rack. Because the rack has 36 items,
it should have the capability for six power distribution units for 1+1 power redundancy, where each PDU has 12
C13 sockets.

Figure 11: Deployment configuration for 3000 stateless users with System x servers

4.13.4 Deployment example 4: Hyper-converged using System x servers
This deployment example supports 750 dedicated virtual desktops with VMware VSAN in hybrid mode (SSDs
and HDDs). Each VM needs 3 GB of RAM. Assuming 125 users per server, six servers are needed with a
minimum of five servers in case of failures (5:1 ratio).

Each virtual desktop is 40 GB. The total required capacity is 86 TB, assuming full clones of 1500 VMs, one
complete mirror of the data (FTT=1), space for swapping, and 30% extra capacity. However, the use of linked
clones for VSAN means that a substantially smaller capacity is required and 43 TB is more than sufficient.
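The capacity figure can be approximated as follows. This Python sketch is illustrative only; it assumes full
clones, a vswap file equal to the 3 GB of VM memory, one mirror copy (FTT=1), and 30% extra capacity, and it
lands close to (but not exactly on) the quoted figure because of rounding in the original calculation.

    # Sketch: approximate raw VSAN capacity for 750 full-clone desktops with FTT=1.
    # Assumptions: 40 GB per clone, 3 GB vswap per VM, one mirror copy, 30% extra slack.
    vms, clone_gb, vswap_gb = 750, 40, 3
    data_gb = vms * (clone_gb + vswap_gb)      # 32,250 GB of VM data
    mirrored_gb = data_gb * 2                  # FTT=1 keeps a complete second copy
    total_tb = mirrored_gb * 1.3 / 1000        # 30% extra capacity, in TB
    print(round(total_tb, 1))                  # ~83.9 TB, close to the ~86 TB noted above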

This example uses six System x3550 M5 1U servers and two G8124E RackSwitch TOR switches. Each server
has two E5-2680v3 CPUs, 512 GB of memory, two network cards, two 400 GB SSDs, and six 1.2 TB HDDs.
Two disk groups, each of one SSD and three HDDs, are used for VSAN.

Figure 12 shows the deployment configuration for this example.


Figure 12: Deployment configuration for 750 dedicated users using hyper-converged System x servers

4.13.5 Deployment example 5: Graphics acceleration for 120 CAD users


This example is for medium-level computer-aided design (CAD) users. Each user requires a persistent virtual
desktop with 12 GB of memory and 80 GB of disk space. Two CAD users share a GRID K2 GPU by using
virtual GPU support. Therefore, a maximum of eight users can use a single NeXtScale nx360 M5 server with two
GRID K2 graphics adapters. Each server needs 128 GB of system memory that uses eight 16 GB DDR4
DIMMs.

For 120 CAD users, 15 servers are required and three extra servers are used for failover in case of a hardware
problem. In total, three NeXtScale n1200 chassis are needed for these 18 servers in 18U of rack space.

Storwize V7000 is used for shared storage for the virtual desktops and for the CAD data store. This example
uses a total of six Storwize enclosures, each with 22 HDDs and four SSDs. Each HDD is a 600 GB 15k rpm
drive and each SSD is 400 GB. Assuming RAID 10 for the HDDs, the six enclosures can store approximately
36 TB of data. The 400 GB SSDs provide more than the recommended 10% of the data capacity and are used
for the Easy Tier function to improve read and write performance.

The 120 VMs for the CAD users can use as much as 10 TB, but that use still leaves a reasonable 26 TB of
storage for CAD data.

In this example, it is assumed that two redundant x3550 M5 servers are used to manage the CAD data store;
that is, all accesses for data from the CAD virtual desktop are routed into these servers and data is
downloaded and uploaded from the data store into the virtual desktops.

The total rack space for the compute servers, storage, and TOR switches for 1 GbE management network,
10 GbE user network and 8 Gbps FC SAN is 39U, as shown in Figure 13.

Figure 13: Deployment configuration for 120 CAD users using NeXtScale servers with GPUs.

5 Appendix: Bill of materials
This appendix contains the bill of materials (BOMs) for different configurations of hardware for Citrix
XenDesktop deployments. There are sections for user servers, management servers, storage, networking
switches, chassis, and racks that are orderable from Lenovo. The last section is for hardware orderable from
an OEM.

The BOM lists in this appendix are not meant to be exhaustive and must always be double-checked with the
configuration tools. Any discussion of pricing, support, and maintenance options is outside the scope of this
document.

For connections between TOR switches and devices (servers, storage, and chassis), the connector cables are
configured with the device. The TOR switch configuration includes only transceivers or other cabling that is
needed for failover or redundancy.

5.1 BOM for enterprise compute servers


This section contains the bill of materials for enterprise compute servers.

Flex System x240


Code Description Quantity
9532AC1 Flex System node x240 M5 Base Model 1
A5SX Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5TE Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5RM Flex System x240 M5 Compute Node 1
A5S0 System Documentation and Software-US English 1
A5SG Flex System x240 M5 2.5" HDD Backplane 1
A5RP Flex System CN4052 2-port 10Gb Virtual Fabric Adapter 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD) 1
A1BM Flex System FC3172 2-port 8Gb FC Adapter 1
A1BP Flex System FC5022 2-port 16Gb FC Adapter 1
Select SSD storage for stateless virtual desktops
5978 Select Storage devices - Lenovo-configured RAID 1
7860 Integrated Solid State Striping 1
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 16
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 24
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ Lenovo SD Media Adapter for System x 1
ASCH RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media) 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

System x3550 M5
Code Description Quantity
5463AC1 System x3550 M5 1
A5BL Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5C0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A58X System x3550 M5 8x 2.5" Base Chassis 1
A59V System x3550 M5 Planar 1
A5AG System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0) 1
A5AF System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5AB System x Advanced LCD Light path Kit 1
A5AK System x3550 M5 Slide Kit G4 1
A59F System Documentation and Software-US English 1
A59W System x3550 M5 4x 2.5" HS HDD Kit 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select SSD storage for stateless virtual desktops
5978 Select Storage devices - Lenovo-configured RAID 1
2302 RAID configuration 1
A2K6 Primary Array - RAID 0 (2 drives required) 1
2499 Enable selection of Solid State Drives for Primary Array 1
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 16
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 24
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

System x3650 M5
Code Description Quantity
5462AC1 System x3650 M5 1
A5GW Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5EP Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5FD System x3650 M5 2.5" Base without Power Supply 1
A5EA System x3650 M5 Planar 1
A5FN System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5R5 System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5FY System x3650 M5 2.5" ODD/LCD Light Path Bay 1
A5G3 System x3650 M5 2.5" ODD Bezel with LCD Light Path 1
A4VH Lightpath LCD Op Panel 1
A5EY System Documentation and Software-US English 1
A5G6 x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID) 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
9297 2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select SSD storage for stateless virtual desktops
5978 Select Storage devices - Lenovo-configured RAID 1
2302 RAID configuration 1
A2K6 Primary Array - RAID 0 (2 drives required) 1
2499 Enable selection of Solid State Drives for Primary Array 1
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 16
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 24
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

NeXtScale nx360 M5
Code Description Quantity
5465AC1 nx360 M5 Compute Node Base Model 1
A5HH Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5J0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5JU nx360 M5 Compute Node 1
A5JX nx360 M5 IMM Management Interposer 1
A1MK Lenovo Integrated Management Module Standard Upgrade 1
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5KD nx360 M5 PCI bracket KIT 1
A5JF System Documentation and Software-US English 1
A5JZ nx360 M5 RAID Riser 1
A5V2 nx360 M5 2.5" Rear Drive Cage 1
A5K3 nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up) 1
A40Q Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x 1
A5UX nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+ 1
A5JV nx360 M5 ML2 Riser 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select SSD storage for stateless virtual desktops
AS7E 400GB 12G SAS 2.5" MLC G3HS Enterprise SSD 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 16
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ Lenovo SD Media Adapter for System x 1
ASBZ 300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System 2

NeXtScale n1200 chassis

Code Description Quantity


5456HC1 n1200 Enclosure Chassis Base Model 1
A41D n1200 Enclosure Chassis 1
A4MM CFF 1300W Power Supply 6
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 6
A42S System Documentation and Software - US English 1
A4AK KVM Dongle Cable 1

NeXtScale nx360 M5 with graphics acceleration
Code Description Quantity
5465AC1 nx360 M5 Compute Node Base Model 1
A5HH Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5J0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5JU nx360 M5 Compute Node 1
A4MB NeXtScale PCIe Native Expansion Tray 1
A5JX nx360 M5 IMM Management Interposer 1
A1MK Lenovo Integrated Management Module Standard Upgrade 1
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5KD nx360 M5 PCI bracket KIT 1
A5JF System Documentation and Software-US English 1
A5JZ nx360 M5 RAID Riser 1
A5V2 nx360 M5 2.5" Rear Drive Cage 1
A5K3 nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up) 1
A40Q Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x 1
A5UX nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+ 1
A5JV nx360 M5 ML2 Riser 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select SSD storage for stateless virtual desktops
AS7E 400GB 12G SAS 2.5" MLC G3HS Enterprise SSD 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 16
Select GRID K1 or GRID K2 for graphics acceleration
A3GM NVIDIA GRID K1 2
A3GN NVIDIA GRID K2 2
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ Lenovo SD Media Adapter for System x 1
ASBZ 300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System 2

5.2 BOM for hosted desktops
Table 27 on page 23 lists the number of compute servers that are needed for the different numbers of users, and
this section contains the corresponding bill of materials.

Flex System x240


Code Description Quantity
9532AC1 Flex System node x240 M5 Base Model 1
A5SX Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5TE Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5RM Flex System x240 M5 Compute Node 1
A5S0 System Documentation and Software-US English 1
A5SG Flex System x240 M5 2.5" HDD Backplane 1
A5RP Flex System CN4052 2-port 10Gb Virtual Fabric Adapter 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD) 1
A1BM Flex System FC3172 2-port 8Gb FC Adapter 1
A1BP Flex System FC5022 2-port 16Gb FC Adapter 1
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 16
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ Lenovo SD Media Adapter for System x 1
ASCH RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media) 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

System x3550 M5
Code Description Quantity
5463AC1 System x3550 M5 1
A5BL Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5C0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A58X System x3550 M5 8x 2.5" Base Chassis 1
A59V System x3550 M5 Planar 1
A5AG System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0) 1
A5AF System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5AB System x Advanced LCD Light path Kit 1
A5AK System x3550 M5 Slide Kit G4 1
A59F System Documentation and Software-US English 1
A59W System x3550 M5 4x 2.5" HS HDD Kit 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 16
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

System x3650 M5
Code Description Quantity
5462AC1 System x3650 M5 1
A5GW Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5EP Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5FD System x3650 M5 2.5" Base without Power Supply 1
A5EA System x3650 M5 Planar 1
A5FN System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5R5 System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5FY System x3650 M5 2.5" ODD/LCD Light Path Bay 1
A5G3 System x3650 M5 2.5" ODD Bezel with LCD Light Path 1
A4VH Lightpath LCD Op Panel 1
A5EY System Documentation and Software-US English 1
A5G6 x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID) 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
9297 2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 16
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

NeXtScale nx360 M5
Code Description Quantity
5465AC1 nx360 M5 Compute Node Base Model 1
A5HH Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5J0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5JU nx360 M5 Compute Node 1
A5JX nx360 M5 IMM Management Interposer 1
A1MK Lenovo Integrated Management Module Standard Upgrade 1
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5KD nx360 M5 PCI bracket KIT 1
A5JF System Documentation and Software-US English 1
A5JZ nx360 M5 RAID Riser 1
A5V2 nx360 M5 2.5" Rear Drive Cage 1
A5K3 nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up) 1
A40Q Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x 1
A5UX nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+ 1
A5JV nx360 M5 ML2 Riser 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 16
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ Lenovo SD Media Adapter for System x 1
ASBZ 300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System 2

NeXtScale n1200 chassis

Code Description Quantity


5456HC1 n1200 Enclosure Chassis Base Model 1
A41D n1200 Enclosure Chassis 1
A4MM CFF 1300W Power Supply 6
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 6
A42S System Documentation and Software - US English 1
A4AK KVM Dongle Cable 1

5.3 BOM for SMB compute servers
Table 29 on page 25 shows the configuration and number of SMB compute servers that are needed for different
numbers of stateless users, and this section contains the corresponding bill of materials. See “BOM for
enterprise compute servers” on page 47 for the bill of materials to use with dedicated users.

System x3650 M5
Code Description Quantity
5462AC1 System x3650 M5 1
A5GU Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W 1
A5EM Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W 1
A5FD System x3650 M5 2.5" Base without Power Supply 1
A5EA System x3650 M5 Planar 1
A5FN System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5R5 System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5EY System Documentation and Software-US English 1
A5GG x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID) 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
2302 RAID Configuration 1
A2KB Primary Array - RAID 10 (minimum of 4 drives required) 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 14
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
9297 2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 16
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24

5.4 BOM for hyper-converged compute servers
This section contains the bill of materials for hyper-converged compute servers.

System x3550 M5

Code Description Quantity


5463AC1 System x3550 M5 1
A5BL Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5C0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A58X System x3550 M5 8x 2.5" Base Chassis 1
A59V System x3550 M5 Planar 1
A5AG System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0) 1
A5AF System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5AB System x Advanced LCD Light path Kit 1
A5AK System x3550 M5 Slide Kit G4 1
A59F System Documentation and Software-US English 1
A59W System x3550 M5 4x 2.5" HS HDD Kit 1
A59X System x3550 M5 4x 2.5" HS HDD Kit PLUS 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z0 ServeRAID M5200 Series 1GB Cache/RAID 5 Upgrade 1
5977 Select Storage devices - no Lenovo-configured RAID 1
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 16
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 24
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 4
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2
A4TP 1.2TB 10K 6Gbps SAS 2.5" G3HS HDD 6
2498 Install largest capacity, faster drives starting in Array 1 1
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

System x3650 M5
Code Description Quantity
5462AC1 System x3650 M5 1
A5GW Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5EP Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5FD System x3650 M5 2.5" Base without Power Supply 1
A5EA System x3650 M5 Planar 1
A5FN System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5R5 System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5EW System x 900W High Efficiency Platinum AC Power Supply 2
6400 2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5FY System x3650 M5 2.5" ODD/LCD Light Path Bay 1
A5G3 System x3650 M5 2.5" ODD Bezel with LCD Light Path 1
A4VH Lightpath LCD Op Panel 1
A5FV System x Enterprise Slides Kit 1
A5EY System Documentation and Software-US English 1
A5GG x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID) 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 2
A3Z0 ServeRAID M5200 Series 1GB Cache/RAID 5 Upgrade 2
5977 Select Storage devices - no Lenovo-configured RAID 1
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
9297 2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x 1
Select amount of system memory
A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 16
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 24
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 4
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2
A4TP 1.2TB 10K 6Gbps SAS 2.5" G3HS HDD 14
2498 Install largest capacity, faster drives starting in Array 1 1
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

System x3650 M5 with graphics acceleration
Code Description Quantity
5462AC1 System x3650 M5 1
A5GW Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5EP Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5FD System x3650 M5 2.5" Base without Power Supply 1
A5EA System x3650 M5 Planar 1
A5FN System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5R5 System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5EW System x 900W High Efficiency Platinum AC Power Supply 2
6400 2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5FY System x3650 M5 2.5" ODD/LCD Light Path Bay 1
A5G3 System x3650 M5 2.5" ODD Bezel with LCD Light Path 1
A4VH Lightpath LCD Op Panel 1
A5FV System x Enterprise Slides Kit 1
A5EY System Documentation and Software-US English 1
A5GG x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID) 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 2
A3Z0 ServeRAID M5200 Series 1GB Cache/RAID 5 Upgrade 2
5977 Select Storage devices - no Lenovo-configured RAID 1
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
9297 2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x 1
A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM 12
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 4
A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2
A4TP 1.2TB 10K 6Gbps SAS 2.5" G3HS HDD 14
2498 Install largest capacity, faster drives starting in Array 1 1
Select GRID K1 or GRID K2 for graphics acceleration
AS3G NVIDIA Grid K1 (Actively Cooled) 2
A470 NVIDIA Grid K2 (Actively Cooled) 2
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7 32GB Enterprise Value USB Memory Key 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2

5.5 BOM for enterprise and SMB management servers
Table 40 on page 32 lists the number of management servers that are needed for the different numbers of
users. To help with redundancy, the bill of materials for the management servers must be the same as that for
the compute servers. For more information, see “BOM for enterprise compute servers” on page 47.

Because the Windows storage servers use a bare-metal operating system (OS) installation, they require much
less memory and can have a reduced configuration as listed below.

Flex System x240

Code Description Quantity


9532AC1 Flex System node x240 M5 Base Model 1
A5SX Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5TE Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5RM Flex System x240 M5 Compute Node 1
A5S0 System Documentation and Software-US English 1
A5SG Flex System x240 M5 2.5" HDD Backplane 1
5978 Select Storage devices - Lenovo-configured RAID 1
8039 Integrated SAS Mirroring - 2 identical HDDs required 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2
A5B8 8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5RP Flex System CN4052 2-port 10Gb Virtual Fabric Adapter 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD) 1
A1BM Flex System FC3172 2-port 8Gb FC Adapter 1
A1BP Flex System FC5022 2-port 16Gb FC Adapter 1

System x3550 M5
Code Description Quantity
5463AC1 System x3550 M5 1
A5BL Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5C0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A58X System x3550 M5 8x 2.5" Base Chassis 1
A59V System x3550 M5 Planar 1
A5AG System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0) 1
A5AF System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5AB System x Advanced LCD Light path Kit 1
A5AK System x3550 M5 Slide Kit G4 1
A59F System Documentation and Software-US English 1
A59W System x3550 M5 4x 2.5" HS HDD Kit 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
5978 Select Storage devices - Lenovo-configured RAID 1
A2K7 Primary Array - RAID 1 (2 drives required) 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2
A5B8 8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2

System x3650 M5
Code Description Quantity
5462AC1 System x3650 M5 1
A5GW Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5EP Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5FD System x3650 M5 2.5" Base without Power Supply 1
A5EA System x3650 M5 Planar 1
A5FN System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5R5 System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) 1
A5AX System x 550W High Efficiency Platinum AC Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5FY System x3650 M5 2.5" ODD/LCD Light Path Bay 1
A5G3 System x3650 M5 2.5" ODD Bezel with LCD Light Path 1
A4VH Lightpath LCD Op Panel 1
A5EY System Documentation and Software-US English 1
A5G6 x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID) 1
A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1
A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1
5978 Select Storage devices - Lenovo-configured RAID 1
A2K7 Primary Array - RAID 1 (2 drives required) 1
A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2
A5B8 8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1
9297 2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2

NeXtScale nx360 M5
Code Description Quantity
5465AC1 nx360 M5 Compute Node Base Model 1
A5HH Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5J0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1
A5JU nx360 M5 Compute Node 1
A5JX nx360 M5 IMM Management Interposer 1
A1MK Lenovo Integrated Management Module Standard Upgrade 1
A1ML Lenovo Integrated Management Module Advanced Upgrade 1
A5KD nx360 M5 PCI bracket KIT 1
A5JF System Documentation and Software-US English 1
5978 Select Storage devices - Lenovo-configured RAID 1
A2K7 Primary Array - RAID 1 (2 drives required) 1
ASBZ 300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System 2
A5JZ nx360 M5 RAID Riser 1
A5V2 nx360 M5 2.5" Rear Drive Cage 1
A5K3 nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up) 1
A5B8 8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 8
A40Q Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x 1
A5UX nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+ 1
A5JV nx360 M5 ML2 Riser 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD) 1
3591 Brocade 8Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2
A2XV Brocade 16Gb FC Dual-port HBA for System x 1
88Y6854 5m LC-LC fiber cable (networking) 2

NeXtScale n1200 chassis

Code Description Quantity


5456HC1 n1200 Enclosure Chassis Base Model 1
A41D n1200 Enclosure Chassis 1
A4MM CFF 1300W Power Supply 6
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 6
A42S System Documentation and Software - US English 1
A4AK KVM Dongle Cable 1

5.6 BOM for shared storage
This section contains the bill of materials for shared storage.

IBM Storwize V7000 storage

Table 44 and Table 45 on page 36 list the number of Storwize V7000 storage controllers and expansion units
that are needed for different user counts.

IBM Storwize V7000 expansion enclosure

Code Description Quantity


619524F IBM Storwize V7000 Disk Expansion Enclosure 1
AHE1 300 GB 2.5-inch 15K RPM SAS HDD 20
AHE2 600 GB 2.5-inch 15K RPM SAS HDD 0
AHF9 600 GB 10K 2.5-inch HDD 0
AHH2 400 GB 2.5-inch SSD (E-MLC) 4
AS26 Power Cord - PDU connection 1

IBM Storwize V7000 control enclosure

Code Description Quantity


6195524 IBM Storwize V7000 Disk Control Enclosure 1
AHE1 300 GB 2.5-inch 15K RPM SAS HDD 20
AHE2 600 GB 2.5-inch 15K RPM SAS HDD 0
AHF9 600 GB 10K 2.5-inch HDD 0
AHH2 400 GB 2.5-inch SSD (E-MLC) 4
AHCB 64 GB to 128 GB Cache Upgrade 2
AS26 Power Cord – PDU connection 1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
AHB5 10Gb Ethernet 4 port Adapter Cards (Pair) 1
AHB1 8Gb 4 port FC Adapter Cards (Pair) 1

IBM Storwize V3700 storage
Table 44 and Table 45 on page 36 show the number of Storwize V3700 storage controllers and expansion units
that are needed for different user counts.

IBM Storwize V3700 control enclosure

Code Description Quantity


609924C IBM Storwize V3700 Disk Control Enclosure 1
ACLB 300 GB 2.5-inch 15K RPM SAS HDD 20
ACLC 600 GB 2.5-inch 15K RPM SAS HDD 0
ACLK 600 GB 10K 2.5-inch HDD 0
ACME 400 GB 2.5-inch SSD (E-MLC) 4
ACHB Cache 8 GB 2
ACFA Turbo Performance 1
ACFN Easy Tier 1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
ACHM 10Gb iSCSI - FCoE 2 Port Host Interface Card 2
ACHK 8Gb FC 4 Port Host Interface Card 2
ACHS 8Gb FC SW SFP Transceivers (Pair) 2

IBM Storwize V3700 expansion enclosure

Code Description Quantity


609924E IBM Storwize V3700 Disk Expansion Enclosure 1
ACLB 300 GB 2.5-inch 15K RPM SAS HDD 20
ACLC 600 GB 2.5-inch 15K RPM SAS HDD 0
ACLK 600 GB 10K 2.5-inch HDD 0
ACME 400 GB 2.5-inch SSD (E-MLC) 4

5.7 BOM for networking
For more information about the number and type of TOR network switches that are needed for different user
counts, see the “Networking” section on page 37.

RackSwitch G8052
Code Description Quantity
7159G52 Lenovo System Networking RackSwitch G8052 (Rear to Front) 1
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
3802 1.5m Blue Cat5e Cable 3
A3KP Lenovo System Networking Adjustable 19" 4 Post Rail Kit 1

RackSwitch G8124E

Code Description Quantity


7159BR6 Lenovo System Networking RackSwitch G8124E (Rear to Front) 1
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
3802 1.5m Blue Cat5e Cable 1
A1DK Lenovo 19" Flexible 4 Post Rail Kit 1

RackSwitch G8264

Code Description Quantity


7159G64 Lenovo System Networking RackSwitch G8264 (Rear to Front) 1
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A3KP Lenovo System Networking Adjustable 19" 4 Post Rail Kit 1
5053 SFP+ SR Transceiver 2
A1DP 1m QSFP+-to-QSFP+ cable 1
A1DM 3m QSFP+ DAC Break Out Cable 0

RackSwitch G8264CS

Code Description Quantity


7159DRX Lenovo System Networking RackSwitch G8264CS (Rear to Front) 1
6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
A2ME Hot-Swappable, Rear-to-Front Fan Assembly Spare 2
A1DK Lenovo 19" Flexible 4 Post Rail Kit 1
A1DP 1m QSFP+-to-QSFP+ cable 1
A1DM 3m QSFP+ DAC Break Out Cable 0
5075 8Gb SFP + SW Optical Transceiver 12

5.8 BOM for racks
The rack count depends on the deployment model. The number of PDUs assumes a fully-populated rack.
Flex System rack

Code Description Quantity


9363RC4, A3GR IBM PureFlex System 42U Rack 1
5897 IBM 1U 9 C19/3 C13 Switched and Monitored 60A 3 Phase PDU 6

System x rack

Code Description Quantity


9363RC4, A1RC IBM 42U 1100mm Enterprise V2 Dynamic Rack 1
6012 DPI Single-phase 30A/208V C13 Enterprise PDU (US) 6

5.9 BOM for Flex System chassis


The number of Flex System chassis that are needed for different numbers of users depends on the deployment
model.
Code Description Quantity
8721HC1 IBM Flex System Enterprise Chassis Base Model 1
A0TA IBM Flex System Enterprise Chassis 1
A0UC IBM Flex System Enterprise Chassis 2500W Power Module Standard 2
6252 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable 2
A1PH IBM Flex System Enterprise Chassis 2500W Power Module 4
3803 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable 4
A0UA IBM Flex System Enterprise Chassis 80mm Fan Module 4
A0UE IBM Flex System Chassis Management Module 1
3803 3 m Blue Cat5e Cable 2
5053 IBM SFP+ SR Transceiver 2
A1DP 1 m IBM QSFP+ to QSFP+ Cable 2
A1PJ 3 m IBM Passive DAC SFP+ Cable 4
A1NF IBM Flex System Console Breakout Cable 1
5075 BladeCenter Chassis Configuration 4
6756ND0 Rack Installation >1U Component 1
675686H IBM Fabric Manager Manufacturing Instruction 1
Select network connectivity for 10 GbE, 10 GbE FCoE or iSCSI, 8 Gb FC, or 16 Gb FC
A3J6 IBM Flex System Fabric EN4093R 10Gb Scalable Switch 2
A3HH IBM Flex System Fabric CN4093 10Gb Scalable Switch 2
5075 IBM 8 Gb SFP + SW Optical Transceiver 4
5605 Fiber Cable, 5 meter multimode LC-LC 4
A0UD IBM Flex System FC3171 8Gb SAN Switch 2
5075 IBM 8 Gb SFP + SW Optical Transceiver 4
5605 Fiber Cable, 5 meter multimode LC-LC 4
A3DP IBM Flex System FC5022 16Gb SAN Scalable Switch 2
A22R Brocade 16Gb SFP + SW Optical Transceiver 4
5605 Fiber Cable, 5 meter multimode LC-LC 4

5.10 BOM for OEM storage hardware
This section contains the bill of materials for OEM shared storage.

IBM FlashSystem 840 storage

Table 46 on page 37 shows how much FlashSystem storage is needed for different user counts.
Code Description Quantity
9840-AE1 IBM FlashSystem 840 1
AF11 4TB eMLC Flash Module 12
AF10 2TB eMLC Flash Module 0
AF1B 1 TB eMLC Flash Module 0
AF14 Encryption Enablement Pack 1
Select network connectivity for 10 GbE iSCSI, 10 GbE FCoE, 8 Gb FC, or 16 Gb FC
AF17 iSCSI Host Interface Card 2
AF1D 10 Gb iSCSI 8 Port Host Optics 2
AF15 FC/FCoE Host Interface Card 2
AF1D 10 Gb iSCSI 8 Port Host Optics 2
AF15 FC/FCoE Host Interface Card 2
AF18 8 Gb FC 8 Port Host Optics 2
3701 5 m Fiber Cable (LC-LC) 8
AF15 FC/FCoE Host Interface Card 2
AF19 16 Gb FC 4 Port Host Optics 2
3701 5 m Fiber Cable (LC-LC) 4

5.11 BOM for OEM networking hardware


For more information about the number and type of TOR network switches that are needed for different user
counts, see the “Networking” section on page 37.

Lenovo 3873 AR2

Code Description Quantity


3873HC2 Brocade 6505 FC SAN Switch 1
00MT457 Brocade 6505 12 Port Software License Pack 1
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416 Brocade 8Gb SFP+ Optical Transceiver 24
88Y6393 Brocade 16Gb SFP+ Optical Transceiver 24

Lenovo 3873 BR1

Code Description Quantity


3873HC3 Brocade 6510 FC SAN Switch 1
00MT459 Brocade 6510 12 Port Software License Pack 2
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416 Brocade 8Gb SFP+ Optical Transceiver 48
88Y6393 Brocade 16Gb SFP+ Optical Transceiver 48

Resources
For more information, see the following resources:
• Lenovo Client Virtualization reference architecture
  lenovopress.com/tips1275
• Citrix XenDesktop
  citrix.com/products/xendesktop
• Citrix XenServer
  citrix.com/products/xenserver
• VMware vSphere
  vmware.com/products/datacenter-virtualization/vsphere
• Atlantis Computing USX
  atlantiscomputing.com/products
• Flex System Interoperability Guide
  ibm.com/redbooks/abstracts/redpfsig.html
• IBM System Storage Interoperation Center (SSIC)
  ibm.com/systems/support/storage/ssic/interoperability.wss

Document history
Version 1.0 (30 Jan 2015)
• Conversion to Lenovo format
• Added Lenovo thin clients
• Added XenServer 6.5 and SMB operational model

Version 1.1 (5 May 2015)
• Added performance measurements and recommendations for the Intel E5-2600 v3 processor family and added BOMs for Lenovo M5 series servers.
• Added hyper-converged operational model and Atlantis USX.
• Added graphics acceleration performance measurements for Citrix XenServer 6.5 pass-through and vGPU modes.

Version 1.2 (2 July 2015)
• Added more performance measurements and recommendations for the Intel E5-2600 v3 processor family to include power workers, Hyper-V hypervisor, SMB mode, and Atlantis USX simple hybrid storage acceleration.
• Added hyper-converged and vGPU deployment models.

Trademarks and special notices
© Copyright Lenovo 2015.
References in this document to Lenovo products or services do not imply that Lenovo intends to make them
available in every country.
Lenovo, the Lenovo logo, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and Rescue and Recovery are
trademarks of Lenovo.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other
countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used Lenovo
products and the results they may have achieved. Actual environmental costs and performance characteristics
may vary by customer.
Information concerning non-Lenovo products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of such
products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. Lenovo has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims related
to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the
supplier of those products.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice,
and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the
full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive
statement of a commitment to specific levels of performance, function or delivery schedules with respect to any
future products. Such commitments are only made in Lenovo product announcements. The information is
presented here to communicate Lenovo’s current investment and development activities as a good faith effort
to help with our customers' future planning.
Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending upon
considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no assurance can be given that an individual
user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-Lenovo websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this Lenovo product and use of those websites is at your own risk.
