


REFERENCE ARCHITECTURE

The VMware View 4.5 Floating Reference Architecture

Table of Contents

Overview  3
Hybrid Design  3
Design Flexibility  4
Reference Architecture Design  6
The Building Block  7
Virtual Infrastructure Configuration  8
Physical Component Configuration  9
Compute Configurations  9
Shared Storage Configuration  10
Access Infrastructure  10
Physical Network Details  11
VMware View Pod Detail  11
VMware View Building Block Detail  12
Session Management  14
Desktop Pool Configurations  14
Active Directory Groups  14
Detailed VMware View Pool Configuration  14
Virtual Machine Image  16
Validation Methodology  17
Client Access Devices  18
Workload Description  19
Validation Results  20
CPU Utilization  20
Memory Usage  21
Application Response Times  22
Storage Subsystem Detail  23
Network Subsystem Detail  25
Conclusion  26
About the Authors  26
Acknowledgements  26
References  27


Overview
Increasingly, organizations are turning to virtual desktop technologies to address the operational and strategic issues related to traditional corporate desktop environments. VMware View 4.5 not only provides a virtual desktop environment that is secure, cost effective, and easy to deploy, but also provides comprehensive design flexibility. VMware View 4.5 modernizes the desktop experience to deliver on-demand private cloud capabilities to users for a consistent experience across the global enterprise.

This document provides a reference architecture for floating virtual desktops with VMware View 4.5. Full descriptions of the test environment, as well as performance metrics captured during test validation, are provided. This reference architecture provides a proven and tested floating architecture for enterprise desktop deployments. The goal was to design and validate a standardized building block, consisting of components capable of supporting 1,000 virtual desktops. The overall design also includes the infrastructure components necessary to integrate ten building blocks into a 10,000-user VMware View pod that can be managed as a single entity.

The architecture uses common components and a standardized design to reduce the overall cost of implementation and management. All the infrastructure components used to validate the reference architecture are interchangeable, which allows components from an organization's vendor of choice to be incorporated and unique features to be utilized that enhance the value of the overall solution.

Hybrid Design
In the previous Stateless Reference Architecture white paper, a design based around a single host with unmatched scalability was introduced. That design was enabled by a highly flexible design methodology in which the individual needs of the desktop were treated as separate layers. Design flexibility allows for a floating desktop regardless of back-end storage medium, including both local solid-state drives and shared storage. A blended design is possible using solid-state drives within the host in addition to shared storage. This allows use of both shared storage and local SSD depending on the pool use case. Individual desktop types can even use local host cache in very specific ways if application needs require it.

The primary purpose of a hybrid design is to allow an environment to start with extremely low up-front capital cost, while still allowing the architecture to easily take advantage of shared storage as required for certain desktop pool types. The architecture focuses on enterprise desktop environments and targets larger deployments commonly found in campus environments. However, a floating design provides a high degree of scalability, from only a few hosts to hundreds of hosts, because the desktop is assembled for the end user at the last minute. This allows even small environments to utilize the same design, yet allows that design to scale to tens of thousands of users.

This white paper, based around the same flexible design methodology, is designed to be an extension of previous work. Please refer to the prior document for VMware View 4.5 component descriptions and VMware View reference architecture methodology. While the prior architecture used local solid-state disk, the design principles in this reference architecture are the same.


Design Flexibility
When building a highly scalable, flexible, and cost-effective architecture, it is important to view each area of a virtual desktop separately. User data, applications, and desktop operating systems must be thought of as dynamic, flexible layers. Each layer must be independent of the others if the environment is to achieve the highest level of scalability and cost effectiveness.

Separating individualized data is extremely important to the cost effectiveness of virtualized desktops, as it is the lynchpin that allows floating desktop environments. User information can be handled using several primary methods:

• Redirect key folders to network-based file shares, and use roaming profiles
• Use profile management software, such as RTO Software or Liquidware Labs Profile Unity
• Configure VMware user data disks, attached at the virtualization layer

Regardless of how user data is handled, the important aspect to remember is that the actual desktop should be viewed as disposable. A floating architecture is only able to obtain low cost and high flexibility when user data is properly separated. While roaming profiles have had a perception of being problematic in the past, with proper design they can be stable and successfully leveraged on a very large scale. A key to successful roaming profiles is keeping the size of the profiles as small as possible. Using extensive folder redirection, especially beyond the defaults found in standard group policy objects, roaming profiles can be stripped down to a bare minimum of files. Examples of recommended folder redirections (an illustrative mapping is sketched after this list):

• Application Data
• Documents
• Media (both pictures and music)
• Desktop
• Favorites
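As referenced above, the following is a minimal sketch of what such a redirection mapping might look like. The UNC share path and folder layout are hypothetical examples, not part of the validated configuration; in practice these targets are set through Group Policy folder redirection.

```python
# Hypothetical folder-redirection map for keeping roaming profiles minimal.
# The share name and layout are illustrative only; real deployments configure
# these targets via Group Policy (User Configuration > Folder Redirection).

REDIRECTED_FOLDERS = {
    "Application Data": r"\\fileserver\profiles$\%USERNAME%\AppData",
    "Documents":        r"\\fileserver\profiles$\%USERNAME%\Documents",
    "Pictures":         r"\\fileserver\profiles$\%USERNAME%\Pictures",
    "Music":            r"\\fileserver\profiles$\%USERNAME%\Music",
    "Desktop":          r"\\fileserver\profiles$\%USERNAME%\Desktop",
    "Favorites":        r"\\fileserver\profiles$\%USERNAME%\Favorites",
}

if __name__ == "__main__":
    # Everything listed here lives on the network share, so the roaming
    # profile itself carries only a small residue of registry and settings.
    for folder, target in REDIRECTED_FOLDERS.items():
        print(f"{folder:>16} -> {target}")
```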


Locking down the desktop, specifically not allowing the user to save data locally, is especially important with floating desktops because the data on the desktop can be eliminated entirely when a user logs out of the system. While this is not strictly required, it is still considered a best practice, as it allows for additional flexibility and cost savings.

Application flexibility can take on many forms, including the applications in the base desktop image. As mentioned in the previous stateless white paper, if the application is abstracted from the base image, additional use cases are easily provided by the same base image. A single pool could be deployed within an environment with multiple user roles. The abstraction of the application allows each user to leverage the same base OS while still maintaining substantial differences in the application set.


Reference Architecture Design


For this reference architecture, a building block-based floating desktop solution capable of supporting at least 10,000 virtual desktops was designed. The solution includes the typical components necessary to integrate 10,000 users into a VMware View pod, which can be managed as a single unit, leveraging a 1,000-user building block as the single replicated entity. Because VMware View 4.5 enables the centralization of the virtual desktop environment in a highly modular fashion, a high degree of design flexibility is possible. This reference architecture leverages VMware vSphere 4.1 and VMware View 4.5, in addition to network, compute, and storage components. Together, these components optimize performance, scalability, and design flexibility while reducing costs.

This architecture uses common components and a standardized design across the building blocks to reduce the overall cost of implementing and managing the solution. All the physical infrastructure components used to validate this reference architecture are interchangeable. This means that an organization can use a vendor of choice for any physical component of the architecture, including the network, storage, and server components. Various vendors might offer or include unique features that further enhance the value of the overall VMware View offering for a particular enterprise. The VMware View 4.5 floating 10,000-user pod is shown below.

[Figure: ten 1,000-user VMware View building blocks, each an eight-host VMware ESX (vSphere) cluster with vCenter Server and VMware View Manager, connected through load-balanced connection servers, a switched network, and the network core]
Figure 1: VMware View 10,000-User Pod


The Building Block


The overall pod contains ten building blocks of 1,000 users each; the building block is the focus of this reference architecture. Each building block contains eight physical servers configured in a single cluster. Centralized shared storage is used for the virtualized desktops, applications, and user profile data, as well as the core infrastructure server data. The blade server chassis used for this validation were each capable of supporting 16 blade servers at full capacity; however, any type of server with the same hardware specification can also be used. One consideration when using blades is the port consolidation of each blade server-to-uplink module. Port consolidation is a common factor when leveraging blade servers. Because of the ability to leverage 10Gbit Ethernet, extensive bandwidth was available to the cluster tested, and it would easily scale to accommodate 16 blades within the chassis.

A separate set of servers was used to host the common infrastructure components needed for an enterprise desktop environment, such as Microsoft Active Directory, DNS, DHCP, and the VMware View Connection Servers. The VMware vCenter servers also ran within this cluster. Each desktop infrastructure service was implemented as a virtual machine running Windows 2008 Server R2. The infrastructure blade chassis also hosted the simulated client devices. Because of the minimal load created by the core infrastructure in comparison to the overall infrastructure, the actual performance of the core infrastructure was left out of this document for clarity.

Each building block shared items such as the physical networking layer, as did the core infrastructure environment. In a smaller environment, it would be possible to have both local and shared storage within the same cluster and to provide the floating desktops and the support infrastructure from the same building block.

Figure 2: VMware View 4.5 Floating Cluster


Virtual Infrastructure Configuration


The virtual infrastructure at the core of each building block comprises physical servers and VMware vSphere 4.1. The overall virtual infrastructure is designed to support 10,000 desktop users across 80 hosts. Depending on the environment and desktop workload, these results may vary.

Each VMware View 4.5 floating building block is designed to be supported by a single VMware vCenter 4.1 server. The VMware vCenter server database was hosted on a single Microsoft SQL 2008 server. Both VMware vCenter and the Microsoft SQL server were implemented as virtual machines. The vSphere hosts running these infrastructure virtual machines were part of a standard VMware vSphere High Availability (HA) cluster to protect them from physical server failures. This is a common approach for hosting desktop infrastructure services that helps provide the highest level of availability.

Each VMware View building block contains one VMware vSphere ESXi 4.1.0 cluster of eight physical servers. The eight-node cluster is designed to host 1,100 virtual desktops with N+1 redundancy. The cluster was configured with HA and vMotion enabled, using default settings. Although the overall pod is designed with dedicated VMware vCenter servers, each including its own dedicated database server, it might be desirable to consolidate the database servers into only one or a few larger database servers. However, database consolidation is not covered as part of this validation effort.
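The hosts-per-block and desktops-per-host arithmetic behind this design can be restated in a short sketch. The figures come from this reference architecture; the helper function itself is illustrative only and not a VMware sizing tool.

```python
# Building-block sizing arithmetic for the floating design described above.
# Figures are taken from this reference architecture; the function is only an
# illustrative helper, not a VMware sizing tool.

def desktops_per_host(total_desktops: int, hosts: int, failed_hosts: int = 0) -> float:
    """Desktops each remaining host must carry if `failed_hosts` are down."""
    surviving = hosts - failed_hosts
    return total_desktops / surviving

BLOCK_DESKTOPS = 1000   # users per building block
BLOCK_HOSTS = 8         # vSphere hosts per cluster
POD_BLOCKS = 10         # building blocks per View pod

print(f"Pod capacity: {POD_BLOCKS * BLOCK_DESKTOPS} desktops "
      f"on {POD_BLOCKS * BLOCK_HOSTS} hosts")
print(f"Normal operation: {desktops_per_host(BLOCK_DESKTOPS, BLOCK_HOSTS):.0f} desktops per host")
print(f"One host failed (N+1): {desktops_per_host(BLOCK_DESKTOPS, BLOCK_HOSTS, 1):.0f} desktops per host")
```

Running the sketch reproduces the per-host figures used in the validation: 125 desktops per host in normal operation and roughly 143 per host with one host out of service.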


Physical Component Configuration


Compute Configurations

High-performing commodity compute hardware was chosen to maximize the per-host value of the reference architecture. Particularly when testing with Windows 7, ample CPU is necessary to give a standard user use case cost-effective yet low-latency desktop consolidation.

VMware View Desktop 1,000-User Cluster

QTY  Description
8    Compute servers: vSphere ESXi 4.1.0 Build 260247
2    Six-core 2.93GHz Intel Xeon X5670 processors
18   144GB RAM total per host, comprised of Kingston 8GB DDR3-1333, Part #KTHPL313/8G
2    73GB SAS drives, 10k RPM, configured as RAID 1
2    Broadcom Corporation NetXtreme II 57711E 10Gbit Ethernet

Infrastructure Servers

QTY  Description
8    Compute servers: vSphere ESXi 4.1.0 Build 260247 (2 infrastructure services hosts for AD, SQL, and so on; 6 load-testing hosts)
2    Quad-core Intel X5570 2.93GHz processors
12   48GB RAM, Kingston 4GB DDR3-1333 RAM, Part #KCSB200A/4G
2    73GB SAS drives, not used (ESXi boot from SAN)
2    Broadcom Corporation NetXtreme II 57711E 10Gbit Ethernet
1    VMware vCenter virtual machine: Windows 2008 Server R2, 2 vCPU, 8GB RAM, 30GB virtual disk
1    Microsoft SQL 2008 Server virtual machine: Windows 2008 Server R2, 2 vCPU, 8GB RAM, 40GB virtual disk
1    Active Directory / DNS / DHCP virtual machine: Windows 2008 Server R2, 1 vCPU, 4GB RAM, 20GB virtual disk


Shared Storage Configuration


VMware View Desktop 1,000-User Cluster

QTY / Version  Description
1              NetApp 3140, OnTap Release 8.0 7-Mode
1              10G Ethernet module
2              2.2 GHz Pentium IV CPUs, 2x32-bit (single core)
2              1,024 Double Data Rate RAM (400 MHz DIMMs)
4              10/100/1,000 BaseT Ethernet ports
Release 3.0    BIOS
28             300GB 15,000 RPM disks: (3) used for user data, application streaming, and so on; (25) used for the desktop linked clones and related data
Access Infrastructure

The physical networking was implemented with a redundant network core of full 10Gbit Ethernet modules. The network core also load balances incoming requests across VMware View Connection Servers, where user requests were routed to the appropriate building block for each virtual desktop session. Realistically, it would be possible to provide networking at 10Gbit speeds for 80 hosts (in blade chassis) with a pair of switches.

The primary purpose of load balancing in a VMware View architecture is to optimize performance by distributing desktop sessions evenly across all available VMware View Connection Servers. It also improves serviceability and availability by directing requests away from servers that are unavailable, and improves scalability by distributing connection requests automatically to new resources as they are added to the VMware View environment. Secondarily, load balancing provides critical protection from logon storms, specifically spikes in users attempting to authenticate to the system. This issue is easily mitigated with replica connection servers and proper load balancing.

Support for a redundancy and failover mechanism, typically at the network level, prevents the load balancer from becoming a single point of failure. For example, Virtual Router Redundancy Protocol (VRRP) communicates with the load balancer to add redundancy and failover capability. If the main load balancer fails, another load balancer in the group automatically starts handling connections.

To provide fault tolerance, a load balancing solution must be able to remove failed VMware View server clusters from the load balancing group. How failed clusters are detected may vary, but regardless of the method used to remove or blacklist an unresponsive server, the solution must ensure that new, incoming sessions are not directed to the unresponsive server. If a VMware View server fails or becomes unresponsive during an active session, users do not lose data. Instead, desktop states are preserved in the virtual desktop so that, when users reconnect to a different VMware View Connection Server in the group, their desktop sessions resume from where they were when the failure occurred.
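As an illustration of the blacklisting behavior described above, the following is a minimal sketch of a health-checked rotation across connection servers. It is not how any particular load balancer is implemented; the hostnames and the simple TCP probe are assumptions used only for the example.

```python
# Illustrative round-robin rotation that skips unresponsive View Connection
# Servers. Hostnames and probe logic are hypothetical; a real deployment
# relies on the load balancer's own health checks and VRRP-style failover.
import itertools
import socket

CONNECTION_SERVERS = ["view-cs-01", "view-cs-02", "view-cs-03"]

def is_responsive(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Crude TCP probe standing in for a real HTTPS health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_server(rotation=itertools.cycle(CONNECTION_SERVERS)) -> str:
    """Return the next healthy server, skipping any that fail the probe."""
    for _ in range(len(CONNECTION_SERVERS)):
        candidate = next(rotation)
        if is_responsive(candidate):
            return candidate
    raise RuntimeError("No healthy VMware View Connection Servers available")
```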


Physical Network Details


Pod Core Networking Components

QTY  Description
2    Modular core networking
8    10Gbit Ethernet modules
2    Load balancing modules
2    10Gbit network switches

VLAN Configuration: Building Block Network Components (shared across building blocks)

VLAN ID  Description
10-16    VMware View desktops infrastructure (802.1Q tagged)
17       Infrastructure servers (802.1Q tagged)
20       Management needs (802.1Q tagged)
23       Storage, iSCSI and NFS (802.1Q tagged)
24       vMotion (802.1Q tagged)

VMware View Connection Servers

Two VMware View Connection Servers were implemented with load balancing to provide redundancy for the building block in this test. For a 10,000-user pod, at least six VMware View Connection Servers should be deployed to support the added capacity and provide the highest level of performance and redundancy. Per VMware best practices, the VMware View Connection Servers were configured as a replica group, with the load balancer in front of the brokers providing client network traffic routing. This allows a View Connection Server to fail with zero impact on the end-user base.
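A rough sketch of how a connection server count such as 5+1 can be reasoned about is shown below. The 2,000-sessions-per-server planning figure is an assumption made for the example, not a value stated in this paper; consult current VMware View sizing guidance for the authoritative number.

```python
import math

# Back-of-the-envelope count of View Connection Servers for a 10,000-user pod.
# SESSIONS_PER_SERVER is an assumed planning figure, not a value from this
# paper; check current VMware View sizing guidance before using it.

POD_USERS = 10_000
SESSIONS_PER_SERVER = 2_000   # assumption, for illustration only
SPARE_SERVERS = 1             # one extra replica for redundancy

required = math.ceil(POD_USERS / SESSIONS_PER_SERVER)
print(f"Connection servers: {required}+{SPARE_SERVERS} "
      f"({required + SPARE_SERVERS} total)")   # -> 5+1 (6 total)
```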
VMware View Pod Detail

QTY  Description
5+1  VMware View Connection Servers
10   VMware View building block clusters
2    VMware View pod network core components
5    VMware vCenter Server 4.1.0 (View Composer installed)
1    Microsoft SQL 2008 Server
1    Shared storage component


VMware View Building Block Detail


1,000-User VMware View Cluster

QTY  Description
1    VMware vSphere ESXi 4.1.0 cluster
8    VMware vSphere ESXi 4.1.0 hosts

Details for each connection server were as follows:

VMware View Global Settings (included for clarity)


View Connection Server Settings

View Connection Server Authentication Settings


Session Management
Desktop pools were configured to match a floating deployment in a simulated production environment. Because of the type of design, only two pools were needed, and thus two were created per 1,000-user building block. Individual user accounts were created in Active Directory and assigned to specific groups. Each group was entitled to a VMware View Connection Server desktop pool.

Desktop Pool Configurations
View Manager Pool Configurations

Unique ID     Users  Desktop Persistence  Image Type    Cluster Assignment
RA_Testing_1  500    Floating             Linked Clone  RA Testing 1
RA_Testing_2  500    Floating             Linked Clone  RA Testing 2

Active Directory Groups


View Manager Active Directory Groups

Group Name            Number of Users
Entitle_RA_Testing_1  500
Entitle_RA_Testing_2  500

Detailed VMware View Pool Configuration

Once a pool was created, the configuration for each was as follows:

Pool Settings


Provisioning Settings

vCenter Settings


Virtual Machine Image


Each virtual desktop in the reference architecture environment was based on a single optimized Microsoft Windows 7 32-bit template. Windows 7 was not designed for a virtual environment and, as such, expects a traditional hard disk. Because of these two facts, it is highly recommended to optimize the Windows 7 desktop. An optimized image shows a drastic decrease in processor, memory, and disk utilization. All testing in this reference architecture was performed with an optimized image following the VMware View Windows 7 Optimization Guide in detail (illustrative examples of such optimizations are sketched after the tables below). The template configuration:
Virtual Machine Configuration

QTY    Description
1,000  Windows 7 32-bit virtual machines
1GB    RAM (per virtual machine)
24GB   Virtual disk (per virtual machine)

Applications contained within the template:

Virtual Machine Applications

Microsoft Office 2010
Adobe Acrobat Reader 9
Microsoft Internet Explorer 8
Windows Media Player
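As referenced above, the following sketch lists examples of the kinds of changes typically covered by the VMware View Windows 7 Optimization Guide. It is an illustrative subset only, not the guide's authoritative list.

```python
# Examples of typical Windows 7 image optimizations for VDI. Illustrative
# subset only; follow the VMware View Windows 7 Optimization Guide for the
# complete, authoritative list of changes.

EXAMPLE_OPTIMIZATIONS = [
    "Disable hibernation (removes hiberfil.sys and its disk footprint)",
    "Disable System Restore",
    "Disable scheduled disk defragmentation",
    "Disable Windows Search indexing",
    "Disable Superfetch/Prefetch services",
    "Turn off screen savers and heavy visual effects",
]

if __name__ == "__main__":
    for item in EXAMPLE_OPTIMIZATIONS:
        print("-", item)
```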


Validation Methodology
When validating VMware View reference architecture designs, it is important to simulate a real-world environment as closely as possible. For this validation, each component of the virtual infrastructure necessary for a 1,000-user cluster was built and validated using a simulated workload. The networking, load balancing, and core common infrastructure components were also implemented to provide the access infrastructure necessary to support a 10,000-user VMware View pod.

Figure 3: VMware View Reference Architecture Lab Workflow

Each test was then conducted in two phases. In the first phase, desktop pools were created, provisioning the virtual machines to a single server. Each pool was created manually, just as it typically would be in a normal environment. Testing was performed to find a realistic limit for a single server, which for a 12-core server was 143 desktops. The second phase of the validation included session establishment, logon, and execution of a workload across a full cluster. Each client access device established individual VMware View session connections to assigned View desktops using the VMware View Client. Once a session was established, the workload was run to simulate typical user activity. Each session worked in the environment as a standard user throughout the test, during which time overall system statistics were collected from several components of the architecture. The following sections explain in more detail how each layer was implemented and used as part of the validation and how the workload was implemented.


Client Access Devices


To test and validate each layer of the architecture, simulated client access devices were deployed to simulate a real-world environment using the VMware Desktop Reference Architecture Workload Simulator (RAWC). The simulated users connect from a client access device through the supporting infrastructure and establish network-based sessions with the View desktops. The RAWC client access devices were implemented separately from the actual building blocks and other infrastructure components, such as Active Directory, DNS, DHCP, and file and print services, to allow for realistic simulation of end users outside the datacenter.

Each RAWC client access device was implemented using Windows XP SP3 64-bit running the 64-bit VMware View Client. Ten client access devices were deployed, and each was used to establish one hundred unique VMware View sessions. Each session was established using a unique individual user account that was entitled to use the desktop pool. During the tests, virtual clients were logged in at random intervals until all the virtual clients had been logged in. Each simulated client's 100 sessions were started, logging in 10 sessions every 20 seconds. The exact RAWC configuration is shown below.
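Separately from the RAWC configuration itself, the ramp-up schedule described above can be restated as simple arithmetic. The script below is not part of RAWC; it only reworks the numbers given in this section.

```python
# Ramp-up arithmetic for the session launch described above: 10 client
# devices, 100 sessions each, launched 10 sessions every 20 seconds.
# This restates the schedule only; it is not part of the RAWC tooling, and
# the overall pod ramp is longer because clients started at random intervals.

CLIENTS = 10
SESSIONS_PER_CLIENT = 100
BATCH_SIZE = 10
BATCH_INTERVAL_S = 20

total_sessions = CLIENTS * SESSIONS_PER_CLIENT
batches_per_client = SESSIONS_PER_CLIENT // BATCH_SIZE
ramp_per_client_s = batches_per_client * BATCH_INTERVAL_S

print(f"Total sessions: {total_sessions}")                       # 1,000
print(f"Batches per client: {batches_per_client}")               # 10
print(f"Approximate ramp per client: {ramp_per_client_s} s")     # ~200 seconds
```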


Workload Description
Each virtual machine was equipped to run a workload that simulates typical user behavior, using an application set commonly found across a broad array of desktop environments. The workload has a set of randomly executed functions that perform operations on a variety of applications. Several other factors can be adjusted to increase the load or change the user behavior, such as the number of words per minute that are typed and the delay between applications being launched. The workload configuration used for this validation included Office 2010 (Word, Excel, and PowerPoint), Internet Explorer 8, Adobe Acrobat Reader, and Windows Media Player. During the execution of the workload, multiple applications were opened at the same time, and windows were minimized and maximized as the workload progressed, randomly switching between each application. Individual application operations that were randomly performed included:

• Microsoft Word 2010: open/minimize/close, write random words/numbers, save modifications
• Microsoft Excel 2010: open/minimize/close, write random numbers, insert/delete columns/rows, copy/paste formulas, save modifications
• Microsoft PowerPoint 2010: open/minimize/close, conduct a slide show presentation
• Adobe Acrobat Reader: open/minimize/close, browse pages in a PDF document
• Internet Explorer 8: open/minimize/close, browse pages
• Windows Media Player: open and view a sample video

Based on the think time and words per minute used for this validation, this workload can be compared to that of a high-end task worker or lower-end knowledge worker. Real results will depend on the environment's actual usage, but in general the results in this reference architecture are designed to be conservative. With this workload, it was validated that at least 1,000 users could be sustained by this architecture using the provided server, network, and storage resources and configuration. In addition to sustaining 1,000 users with fast application response times, the necessary resources were also available to accommodate a host failure within each cluster, as well as a moderate amount of growth or an unpredicted increase or change in user workload. Depending on the specific environment, additional changes or features implemented, and the workload characteristics of your users, you may be able to accommodate more or fewer users.
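To make the structure of such a randomized workload concrete, here is a minimal sketch of an application-switching loop. It is not the RAWC implementation; the operation names, think times, and typing-rate parameter are illustrative placeholders.

```python
import random
import time

# Minimal sketch of a randomized desktop workload loop in the spirit of the
# one described above. NOT the RAWC implementation; operations, think times,
# and the words-per-minute parameter are illustrative only.

OPERATIONS = {
    "Word":        ["open", "type_random_text", "save", "minimize", "close"],
    "Excel":       ["open", "enter_numbers", "copy_paste_formulas", "save", "close"],
    "PowerPoint":  ["open", "run_slide_show", "close"],
    "Acrobat":     ["open", "browse_pages", "close"],
    "IE8":         ["open", "browse_page", "close"],
    "MediaPlayer": ["open", "play_sample_video", "close"],
}

def run_workload(iterations=20, words_per_minute=40, think_time_s=2.0, seed=None):
    rng = random.Random(seed)
    for _ in range(iterations):
        app = rng.choice(list(OPERATIONS))        # switch applications at random
        action = rng.choice(OPERATIONS[app])      # pick a random operation
        print(f"{app}: {action} (typing rate {words_per_minute} wpm)")
        time.sleep(think_time_s * rng.random())   # randomized think time

if __name__ == "__main__":
    run_workload(iterations=5, seed=42)
```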


Validation Results
This section details the results and observations from the validation of floating desktop virtualization with VMware View 4.5. All validation included several test iterations to ensure data consistency after a ramp-up period; for clarity, multiple iterations are not shown. All data includes the initial period of mass user logons that ramps up the test, to show that even when pushed heavily the environment provides an acceptable range of performance.

CPU Utilization
The graphs below show per-host processor usage, both for the normal 125 users that would be on a server during a 1,100-user, eight-host test (58% average) and for 143 users, simulating a single host failure in the cluster (67% average). Ample CPU headroom is available for additional scalability; a rough per-desktop share of host CPU is worked out after the figures below.

Figure 4: 125-User Host CPU Usage

Figure 5: 143-User Host CPU Usage
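For perspective, the host specification and the average utilization figures above imply a rough per-desktop share of CPU, worked out in the sketch below. The arithmetic is illustrative only and ignores hyperthreading and hypervisor overhead.

```python
# Rough per-desktop CPU share derived from the host specification
# (2 x six-core 2.93 GHz Xeon X5670) and the average utilization figures
# above. Illustrative arithmetic only; ignores hyperthreading and overhead.

HOST_GHZ = 2 * 6 * 2.93   # total rated core GHz per host

for desktops, avg_util in [(125, 0.58), (143, 0.67)]:
    per_desktop_mhz = HOST_GHZ * avg_util / desktops * 1000
    print(f"{desktops} desktops/host at {avg_util:.0%}: "
          f"~{per_desktop_mhz:.0f} MHz per desktop")
```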


Memory Usage
Because the total number of desktops on each host consumed only the available memory, there is limited memory overcommitment, and additional users could be supported. As expected, with 143 users some memory page sharing becomes active; however, this is still minimal. The arithmetic behind this is sketched after the figures below.

Figure 6: 125-User Host Memory Usage

Figure 7: 143-User Host Memory Usage
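The limited overcommitment can be checked with a quick calculation, sketched below. Guest RAM comes from the virtual machine template described earlier; the per-VM virtualization overhead value is an assumption added for illustration.

```python
# Host memory demand versus capacity, using the 1GB guest RAM from the
# template and the 144GB host configuration described earlier. The per-VM
# virtualization overhead figure is an assumed illustrative value.

HOST_RAM_GB = 144
GUEST_RAM_GB = 1.0
OVERHEAD_GB = 0.05   # assumed per-VM overhead, for illustration only

for desktops in (125, 143):
    demand = desktops * (GUEST_RAM_GB + OVERHEAD_GB)
    print(f"{desktops} desktops: ~{demand:.0f}GB demanded of {HOST_RAM_GB}GB "
          f"({demand / HOST_RAM_GB:.0%} of host RAM)")
```

Under these assumptions, 125 desktops stay comfortably within physical RAM, while 143 desktops sit right at the host's capacity, which is consistent with page sharing becoming active only in the failure scenario.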


Application Response Times


In all tests, application response time was tracked to ensure the user experience would remain within acceptable limits.

Figure 8: 125 Desktops Application Response Time

Figure 9: 1,100-User Average Application Response Time


Storage Subsystem Detail


The per-user-count storage detail reveals how important writes are in a desktop virtualization environment that uses properly optimized images. Once the initial read I/O storm subsides, writes take on the dominant role.

Figure 10: 125 Desktops, Storage I/O Per Second

Figure 11: 143 Desktops, Storage I/O Per Second


Particularly when viewing the I/O usage of the entire 1,000-user cluster, the ability to provide 5,000 sustained write IOPS is critical. However, the initial read storm must also be planned for with proper storage caching in order to provide acceptable performance in all cases. A per-desktop planning figure derived from the sustained write number is sketched after the figures below.

Figure 12: 1,000 Desktops Storage I/O Per Second

Figure 13: 1,000 Desktops Storage MBytes Per Second
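The sustained write figure above translates into a simple per-desktop planning number, sketched below. This is arithmetic on the stated totals only; it does not model the initial read storm or controller cache behavior.

```python
# Per-desktop steady-state write IOPS implied by the cluster-level figure
# quoted above (about 5,000 sustained write IOPS for 1,000 desktops).
# Planning arithmetic only; no modeling of the read storm or cache effects.

CLUSTER_DESKTOPS = 1_000
SUSTAINED_WRITE_IOPS = 5_000

per_desktop = SUSTAINED_WRITE_IOPS / CLUSTER_DESKTOPS
print(f"~{per_desktop:.0f} sustained write IOPS per desktop")   # ~5
```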


Network Subsystem Detail

Figure 14: 1,000 Desktops Storage Network MBits Per Second

Figure 15: 1,000 Desktops Virtual Machine Network MBits Per Second


Conclusion
The VMware View 4.5 floating architecture can be used to provide flexible, low-cost desktops on local host storage and shared storage alike. The flexibility of the design allows an organization to start with even a single host and gradually grow toward the other extreme to address different use cases. In this reference architecture, 1,000-user building blocks were shown to provide adequate performance and proper redundancy. The core design principles of separating user data, applications, and the operating system are the foundation, and a best practice, regardless of the underlying technical VMware View design.

About the Authors


World Wide Technology, Inc. (WWT) is a leading systems integrator providing technology products, services, and supply chain solutions to customers around the globe. WWT understands today's advanced technologies, including great depth in datacenter transformation leveraging virtualization and cloud technologies. When properly planned, procured, and deployed, these business solutions reduce costs, increase profitability, and ultimately improve a company's ability to effectively serve its customers. Founded in 1990, WWT has grown from a small start-up business to a world-class organization with over $3 billion in revenue and more than 1,400 highly trained employees. WWT continues to achieve consistent financial growth and provide its partners with uncommon strength and stability. WWT's proven processes span the technology implementation lifecycle as it provides customers with advanced technology solutions. By engaging WWT to manage their planning, procurement, and deployment processes, customers benefit from certified technology professionals, nationwide logistics facilities, and a suite of eCommerce applications designed to greatly simplify the supply chain.

Acknowledgements
VMware would like to acknowledge the following individuals for their contributions to this paper, their help with the test setup and analysis, and for providing the lab infrastructure and building the joint solution:

World Wide Technology, Inc.: Jason Campagna and David Kinsman
VMware, Inc.: Mac Binesh, Fred Schimscheimer, Matt Eccleston, and Mason Uyeda


References
VMware View
http://www.vmware.com/products/view/

VMware Reference Architecture for Stateless Virtual Desktops with VMware View 4.5
http://www.vmware.com/files/pdf/VMware-View-45-Stateless-RA.pdf

VMware vSphere 4.1
http://www.vmware.com/products/vsphere/

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW_11Q1_RA_Floating_EN_PXX
