Table of Contents

Overview
Hybrid Design
Design Flexibility
Reference Architecture Design
    The Building Block
    Virtual Infrastructure Configuration
    Physical Component Configuration
        Compute Configurations
        Shared Storage Configuration
Access Infrastructure
    Physical Network Details
    VMware View Pod Detail
    VMware View Building Block Detail
Session Management
    Desktop Pool Configurations
    Active Directory Groups
    Detailed VMware View Pool Configuration
Virtual Machine Image
Validation Methodology
    Client Access Devices
    Workload Description
Validation Results
    CPU Utilization
    Memory Usage
    Application Response Times
    Storage Subsystem Detail
    Network Subsystem Detail
Conclusion
About the Authors
Acknowledgements
References
REFERENCE ARCHITECTURE / 2
Overview
Increasingly, organizations are turning to virtual desktop technologies to address the operational and strategic issues related to traditional corporate desktop environments. VMware View 4.5 not only provides a virtual desktop environment that is secure, cost effective, and easy to deploy, but also provides comprehensive design flexibility. VMware View 4.5 modernizes the desktop experience to deliver on-demand private cloud capabilities to users for a consistent experience across the global enterprise. This document provides a reference architecture for floating virtual desktops with VMware View 4.5. Full descriptions of the test environment, as well as performance metrics captured during test validation, are provided. This reference architecture provides a proven and tested floating architecture for enterprise desktop deployments. The goal was to design and validate a standardized building block, consisting of components capable of supporting 1,000 virtual desktops. The overall design also includes the infrastructure components necessary to integrate ten building blocks into a 10,000-user VMware View pod that can be managed as a single entity. The architecture uses common components and a standardized design to reduce the overall cost of implementation and management. All the infrastructure components used to validate the reference architecture are interchangeable, which allows components from an organization's vendor of choice to be incorporated and their unique features to be used to enhance the value of the overall solution.
Hybrid Design
In the previous Stateless Reference Architecture white paper, a design based around a single host with unmatched scalability was introduced. That design was enabled by a highly flexible design methodology in which the individual needs of the desktop were treated as separate layers. Design flexibility allows for a floating desktop regardless of back-end storage medium, including both local solid-state drives and shared storage. A blended design is possible using solid-state drives within the host in addition to shared storage, allowing either to be used depending on the pool use case. Individual desktop types can even use the local host cache in very specific ways if application needs require it. The primary purpose of a hybrid design is to allow an environment to start with extremely low up-front capital cost, while still allowing the architecture to easily take advantage of shared storage as required for certain desktop pool types. The architecture focuses on enterprise desktop environments and targets larger deployments commonly found in campus environments. However, because the desktop is assembled for the end user just in time, a floating design scales from only a few hosts to hundreds of hosts. This allows even small environments to use the same design, and allows that design to scale to tens of thousands of users. This white paper, based around the same flexible design methodology, is designed to be an extension of the previous work. Please see the prior document for VMware View 4.5 component descriptions and VMware View reference architecture methodology. While the prior architecture used local solid-state disk, the design principles in this reference architecture are the same.
Design Flexibility
When building a highly scalable, flexible, and cost-effective architecture, it is important to view each area of a virtual desktop separately. User data, applications, and desktop operating systems must be thought of as dynamic, flexible layers. Each layer must be independent of the other layers if the environment is to have the highest level of scalability and cost effectiveness.
Separating individualized data is extremely important to the cost effectiveness of virtualized desktops, as it is the linchpin that allows floating desktop environments. User information can be handled using several primary methods:

- Redirect key folders to network-based file shares, and use roaming profiles
- Use profile management software, such as RTO Software or Liquidware Labs ProfileUnity
- Configure VMware View user data disks, attached at the virtualization layer

Regardless of how user data is handled, the important aspect to remember is that the actual desktop should be viewed as disposable. A floating architecture only achieves low cost and high flexibility when user data is properly separated. While roaming profiles have had a reputation for being problematic in the past, with proper design they can be stable and successfully leveraged on a very large scale. A key to successful roaming profiles is keeping the profiles as small as possible. Using extensive folder redirection, especially beyond the defaults found in standard group policy objects, roaming profiles can be stripped down to a bare minimum of files. Examples of recommended folder redirections:

- Application Data
- Documents
- Media (both Pictures and Music)
- Desktop
- Favorites
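The folder-redirection approach above amounts to mapping each redirected folder to a per-user network path. The following is a minimal sketch in Python; the file server and share names are hypothetical examples, not part of the validated architecture:

```python
# Redirected folders, per the recommendations above.
REDIRECTED_FOLDERS = [
    "Application Data", "Documents", "Pictures", "Music", "Desktop", "Favorites",
]

def redirection_target(username, folder, share=r"\\fileserver\redirect"):
    """Return the network path a redirected folder would point to.

    The \\fileserver\redirect share is a hypothetical example target."""
    if folder not in REDIRECTED_FOLDERS:
        raise ValueError(f"{folder} is not configured for redirection")
    return f"{share}\\{username}\\{folder}"
```

Keeping these folders on a file share is what keeps the roaming profile itself small enough to make large-scale roaming viable.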
Locking down the desktop, specifically preventing the user from saving data locally, is especially important with floating desktops, because the data on the desktop may be eliminated entirely when a user logs out of the system. While a fully locked-down design is not strictly required, it is still considered a best practice because it allows for additional flexibility and cost savings. Application flexibility can take on many forms, including which applications are placed in the base desktop image. As mentioned in the previous stateless white paper, if an application is abstracted from the base image, additional use cases are easily served by the same base image. A single pool could be deployed within an environment with multiple user roles: abstracting the application allows each user to leverage the same base OS while still maintaining substantial differences in the application set.
[Figure: VMware View pod design. Ten 1,000-user building blocks, each a VMware ESX cluster of eight VMware vSphere hosts, managed through five vCenter Servers and VMware View Manager.]
Compute Configurations

Building Block Compute Servers

QTY   Description
8     vSphere ESXi 4.1.0 Build 260247 hosts
2     Six-core 2.93GHz Intel Xeon X5670 processors (per host)
18    Kingston 8GB DDR3-1333 DIMMs, Part #KTHPL313/8G (144GB RAM total per host)
2     73GB SAS drives, 10K RPM, configured as RAID 1
2     Broadcom NetXtreme II 57711E 10Gbit Ethernet adapters

Infrastructure Servers

QTY   Description
8     vSphere ESXi 4.1.0 Build 260247 hosts (2 for infrastructure services such as AD and SQL, 6 as load-testing hosts)
2     Quad-core 2.93GHz Intel Xeon X5570 processors (per host)
12    Kingston 4GB DDR3-1333 DIMMs, Part #KCSB200A/4G (48GB RAM per host)
2     73GB SAS drives, not used (ESXi boots from SAN)
2     Broadcom NetXtreme II 57711E 10Gbit Ethernet adapters

Infrastructure Virtual Machines

- VMware vCenter: Windows Server 2008 R2, 2 vCPU, 8GB RAM, 30GB virtual disk
- Microsoft SQL Server 2008: Windows Server 2008 R2, 2 vCPU, 8GB RAM, 40GB virtual disk
- Active Directory, DNS/DHCP: Windows Server 2008 R2, 1 vCPU, 4GB RAM, 20GB virtual disk
Shared Storage Configuration

NetApp 3140, Data ONTAP Release 8.0 7-mode

- 10G Ethernet
- 2x 2.2GHz 32-bit (single-core) Pentium IV CPUs
- 1,024 double data rate RAM (400MHz DIMMs)
- 10/100/1,000 BaseT Ethernet ports
- (3) 300GB 15,000 RPM disks (used for user data, application streaming, and so on)
- (25) 300GB 15,000 RPM disks (used for the desktop linked clones and related data)
Access Infrastructure

The physical networking was implemented with a redundant network core of full 10Gbit Ethernet modules. The network core also load balances incoming requests across VMware View Connection Servers, routing user requests to the appropriate building block for each virtual desktop session. Realistically, a pair of switches could provide networking at 10Gbit speeds for 80 hosts (in blade chassis).

The primary purpose of load balancing in a VMware View architecture is to optimize performance by distributing desktop sessions evenly across all available VMware View Connection Servers. It also improves serviceability and availability by directing requests away from servers that are unavailable, and improves scalability by automatically distributing connection requests to new resources as they are added to the VMware View environment. Secondarily, load balancing provides critical protection from logon storms, specifically spikes of users attempting to authenticate to the system. This issue is easily mitigated with replica connection servers and proper load balancing.

Support for a redundancy and failover mechanism, typically at the network level, prevents the load balancer from becoming a single point of failure. For example, Virtual Router Redundancy Protocol (VRRP) communicates with the load balancer to add redundancy and failover capability. If the main load balancer fails, another load balancer in the group automatically starts handling connections. To provide fault tolerance, a load balancing solution must be able to remove failed VMware View server clusters from the load balancing group. How failed clusters are detected may vary, but regardless of the method used to remove or blacklist an unresponsive server, the solution must ensure that new, incoming sessions are not directed to the unresponsive server.
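The blacklisting behavior described above can be sketched as a round-robin distributor that skips unresponsive servers. This is an illustration only; server names are hypothetical, and a production deployment would use the hardware load balancers and VRRP failover described in the text:

```python
import itertools

class ConnectionServerPool:
    """Round-robin session distribution across View Connection Servers,
    with unresponsive servers blacklisted so no new sessions reach them."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.blacklist = set()
        self._cycle = itertools.cycle(self.servers)

    def mark_unresponsive(self, server):
        """Health check failed: stop directing new sessions here."""
        self.blacklist.add(server)

    def mark_responsive(self, server):
        """Server recovered: allow new sessions again."""
        self.blacklist.discard(server)

    def next_server(self):
        """Pick the next responsive server for an incoming session."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server not in self.blacklist:
                return server
        raise RuntimeError("no responsive View Connection Servers available")
```

Existing sessions are unaffected by blacklisting; as the following section notes, desktop state lives in the virtual desktop, so users simply reconnect through another server in the group.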
If a VMware View server fails or becomes unresponsive during an active session, users do not lose data. Instead, desktop states are preserved in the virtual desktop so that, when users reconnect to a different VMware View connection server in the group, their desktop sessions resume from where they were when the failure occurred.
Building Block Network Components (technically shared across building blocks)

QTY   Description
2     Modular core networking chassis
8     10Gbit Ethernet modules
2     Load balancing modules
2     10Gbit network switches

VLAN Configuration

VLAN ID   Description                            Tagging
10-16     VMware View desktops infrastructure    802.1Q tagged
17        Infrastructure servers                 802.1Q tagged
20        Management                             802.1Q tagged
23        Storage (iSCSI and NFS)                802.1Q tagged
24        vMotion                                802.1Q tagged
VMware View Connection Servers

Two VMware View Connection Servers were implemented with load balancing to provide redundancy for the building block in this test. For a 10,000-user pod, at least six VMware View Connection Servers should be deployed to support the added capacity and provide the highest level of performance and redundancy. Per VMware best practices, the VMware View Connection Servers were configured as a replica group, with the load balancer in front of the brokers routing client network traffic. This allows a View Connection Server to fail with zero impact on the end-user base.

VMware View Pod Detail
Pod Core Networking Components

QTY   Description
5+1   VMware View Connection Servers
10    VMware View building block clusters
2     VMware View pod network core components
5     VMware vCenter Server 4.1.0 (View Composer installed)
1     Microsoft SQL Server 2008
1     Shared storage component
VMware View Building Block Detail

QTY   Description
1     VMware vSphere ESXi 4.1.0 cluster
8     VMware vSphere ESXi 4.1.0 hosts
Details for each connection server were as follows:

VMware View Global Settings (included for clarity)
Session Management
Desktop pools were configured to match a floating deployment in a simulated production environment. Because of this type of design, only two pools were needed, and thus created, per 1,000-user building block. Individual user accounts were created in Active Directory and assigned to specific groups, and each group was entitled to a VMware View Connection Server desktop pool.

Desktop Pool Configurations
View Manager Pool Configurations
Detailed VMware View Pool Configuration

Once a pool was created, the configuration for each was as follows:

Pool Settings
Provisioning Settings
vCenter Settings
Virtual Machine Image

The base virtual machine image included the following applications:

- Microsoft Office 2010
- Adobe Acrobat Reader 9
- Microsoft Internet Explorer 8
- Windows Media Player
Validation Methodology
When validating VMware View reference architecture designs, it is important to simulate a real-world environment as closely as possible. For this validation, each component of the virtual infrastructure necessary for a 1,000-user cluster was built and validated using a simulated workload. The networking, load balancing, and core common infrastructure components of the access infrastructure necessary to support a 10,000-user VMware View pod were also implemented.
Each test was then conducted in two phases. In the first phase, desktop pools were created, provisioning the virtual machines to a single server. Each pool was created manually, just as it typically would be in a normal environment. Testing was performed to find a realistic limit for the single server, which for a 12-core server was 143 desktops. The second phase of the validation included session establishment, logon, and execution of a workload across a full cluster. Each client access device established an individual VMware View session to its assigned View desktop using the VMware View Client. Once a session was established, the workload was run to simulate typical user activity. Each session worked in the environment as a standard user throughout the test, during which overall system statistics were collected from several components of the architecture. The following sections explain in more detail how each layer was implemented and used as part of the validation, and how the workload was implemented.
Workload Description
Each virtual machine was equipped to run a workload that simulates typical user behavior, using an application set commonly found across a broad array of desktop environments. The workload has a set of randomly executed functions that perform operations on a variety of applications. Several other factors can be adjusted to increase the load or tune user behavior, such as the number of words per minute typed and the delay between application launches. The workload configuration used for this validation included Office 2010 (Word, Excel, and PowerPoint), Internet Explorer 8, Adobe Acrobat Reader, and Windows Media Player. During execution of the workload, multiple applications were opened at the same time, and windows were minimized and maximized as the workload progressed, randomly switching between applications. Individual application operations that were randomly performed included:

- Microsoft Word 2010: open/minimize/close, write random words and numbers, save modifications
- Microsoft Excel 2010: open/minimize/close, write random numbers, insert and delete columns and rows, copy and paste formulas, save modifications
- Microsoft PowerPoint 2010: open/minimize/close, conduct a slide show presentation
- Adobe Acrobat Reader: open/minimize/close, browse pages in a PDF document
- Internet Explorer 8: open/minimize/close, browse a page
- Windows Media Player: open and view a sample video

Based on the think time and words per minute used for this validation, this workload is comparable to that of a high-end task worker or lower-end knowledge worker. Real results depend on an environment's actual usage, but in general the results in this reference architecture are designed to be conservative. With this workload, it was validated that at least 1,000 users could be sustained by this architecture using the provided server, network, and storage resources and configuration.
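The randomized workload loop can be sketched as follows. This is an illustrative approximation of the behavior described above, not the actual test harness; the operation names paraphrase the per-application list:

```python
import random

# Per-application operations, paraphrasing the workload description above.
OPERATIONS = {
    "Microsoft Word 2010": ["open", "minimize", "write random words", "save", "close"],
    "Microsoft Excel 2010": ["open", "write random numbers", "insert/delete rows",
                             "copy/paste formulas", "save", "close"],
    "Microsoft PowerPoint 2010": ["open", "run slide show", "close"],
    "Adobe Acrobat Reader": ["open", "browse pages", "close"],
    "Internet Explorer 8": ["open", "browse page", "close"],
    "Windows Media Player": ["open", "view sample video", "close"],
}

def simulate_session(steps, rng=None):
    """Produce one simulated user's trace of (application, operation) pairs,
    randomly switching between applications as the workload progresses."""
    rng = rng or random.Random()
    apps = list(OPERATIONS)
    trace = []
    for _ in range(steps):
        app = rng.choice(apps)                    # random application switch
        trace.append((app, rng.choice(OPERATIONS[app])))
    return trace
```

A real harness would additionally pace the trace with think time and a typing rate (words per minute), the two knobs the text calls out for tuning load.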
In addition to sustaining 1,000 users with fast application response times, the necessary resources were also available to accommodate a host failure within each cluster, as well as a moderate amount of growth or an unpredicted change in user workload. Depending on the specific environment, any additional changes or features implemented, and the workload characteristics of your users, you may be able to accommodate more or fewer users.
Validation Results
This section details the results and observations from the validation of floating desktop virtualization with VMware View 4.5. All validation included several test iterations after a ramp-up period to ensure data consistency; for clarity, multiple iterations are not shown. All data includes the initial period of mass user logons that ramps up the test, to show that even when pushed heavily the environment provides an acceptable range of performance.
CPU Utilization
The graphs below show the per-host processor usage, both for the normal 125 users on a server during a 1,000-user, 8-host test (58% average) and for 143 users, simulating a single host failure in the cluster (67% average). Ample CPU headroom is available for additional scalability.
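The two load points follow directly from the building-block size: 1,000 users spread across eight hosts gives 125 per host, and losing one host pushes the seven survivors to roughly 143 each. A quick sanity check of that arithmetic:

```python
import math

def users_per_host(total_users, hosts):
    """Sessions each host carries when users are spread evenly
    (rounded up, since a session cannot be split across hosts)."""
    return math.ceil(total_users / hosts)

normal = users_per_host(1000, 8)    # all eight hosts healthy -> 125
degraded = users_per_host(1000, 7)  # one host failed, seven absorb the load -> 143
```

The degraded figure matching the 143-desktop single-server limit found during phase-one testing is what makes the 8-host block an N+1 design.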
Memory Usage
Because the total number of desktops on each host fit within available memory, there was limited memory overcommitment, and additional users could be supported. As expected, with 143 users some memory page sharing becomes active; however, this is still minimal.
Particularly when viewing the I/O usage of the entire 1,000-user cluster, the ability to provide 5,000 sustained write IOPS is critical. The initial read storm, however, must be planned for with proper storage caches in order to provide acceptable performance in all cases.
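The 5,000 sustained write IOPS for 1,000 desktops works out to an average of 5 write IOPS per desktop. That per-desktop figure is derived here from the cluster-level observation, not measured individually; a minimal sizing sketch under that assumption:

```python
def sustained_write_iops(desktops, per_desktop_iops=5.0):
    """Estimate the sustained write IOPS a floating-desktop cluster generates.

    per_desktop_iops defaults to 5.0, derived from the 5,000 IOPS observed
    for the 1,000-desktop cluster; real workloads vary, and the initial
    read storm must be sized separately (via storage cache), as noted above."""
    return desktops * per_desktop_iops
```

For example, a single host's failover load of 143 desktops would contribute roughly 715 sustained write IOPS under this average.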
Figure 15: 1,000 Desktops Virtual Machine Network MBits Per Second
Conclusion
The VMware View 4.5 floating architecture can be used to provide flexible, low-cost desktops on local host storage and shared storage alike. The flexibility in the design allows an organization to start with even a single host and gradually scale to the other extreme to address different use cases. In this reference architecture, 1,000-user building blocks were shown to provide adequate performance and proper redundancy. The core design principle of separating user data, applications, and the operating system is a foundation and a best practice regardless of the underlying technical VMware View design.
Acknowledgements
VMware would like to acknowledge the following individuals for their contributions to this paper and help with the test setup, analysis, as well as providing the lab infrastructure and building of the joint solution: World Wide Technology, Inc.: Jason Campagna and David Kinsman VMware, Inc.: Mac Binesh, Fred Schimscheimer, Matt Eccleston, and Mason Uyeda
References
VMware View
http://www.vmware.com/products/view/

VMware Reference Architecture for Stateless Virtual Desktops with VMware View 4.5
http://www.vmware.com/files/pdf/VMware-View-45-Stateless-RA.pdf

VMware vSphere 4.1
http://www.vmware.com/products/vsphere/
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW_11Q1_RA_Floating_EN_PXX