using VMware
No part of this paper may be reproduced or distributed without the express written consent of its
author David A Smith. Copyright 2010
Abstract
The following whitepaper is a case study of how a long-term healthcare company moved
from a hosted IT solution to an in-house, self-managed solution built on the latest in virtual
computing technology. Guardian Healthcare Services is a long-term healthcare provider
whose IT infrastructure was hosted at a remote Application Service Provider, off site and out
of state. The project was initiated based on several business drivers, including cost
savings, more autonomy over systems and the need for agility in today’s healthcare
market.
This case study presents how the Guardian Healthcare Services IT team took
fourteen remote nursing home facilities covering three states and brought the entire
infrastructure from their hosting provider to their own in-house solution. The challenges
were great and the outcome was excellent. The technologies used in the project were
provided by leaders in virtualization, servers and storage, and software: VMware, HP and
Microsoft. The following is the story of Guardian’s migration and datacenter
implementation using VMware to build a fully virtualized infrastructure.
Table of Contents
Abstract
Executive Summary
Introduction to Virtualization
Benefits of Virtualization
Case Study: Guardian Healthcare Services
Project Planning
Infrastructure Plan
High Level Migration and Datacenter Planning
Server Consolidation
HP Hardware and Installation
ESX Installation
Enterprise Servers
2008 Terminal Services
Migration
Unique Benefits Realized
P2V
Cloning and Templates
Cost Effective Backup
Conclusion
References
List of Figures
Figure 1 VMkernel
Figure 7 HP Hardware
Figure 8 vCenter Virtual Environment Management
Executive Summary
Introduction to Virtualization
Imagine being able to run multiple operating systems on the same set of physical
hardware. Now imagine being able to make efficient use of all that physical hardware and
to reduce multiple racks to a single rack of servers. There would be no more
wasted RAM or disk space, no more underutilization of expensive
processors and no more wasted space in the datacenter. Hardware costs,
space requirements, and power and cooling costs are all reduced.
Server virtualization is the ability to abstract the operating system from the
underlying hardware. This provides a new way to look at computing and, more importantly,
at the datacenter. We’re no longer stuck running one OS per physical server. We can now
run multiple operating systems regardless of the physical box they are sitting on.
Virtualization also provides the ability to run multiple instances of very different operating
systems on the same machine.
There are two common forms of virtualization in today’s datacenter: hypervisor-based
virtualization and OS-level virtualization. The most commonly used, and the first that
comes to mind, are hypervisor solutions such as VMware, Xen, Virtual Iron and Microsoft’s
Hyper-V. In these environments one or more physical servers are loaded with
virtualization software. That software provides a layer between the hardware and the guest
operating systems called the hypervisor.
Guests and hosts are the terms used to describe how the virtualization
environment operates. The host is the actual physical machine, and it can be of
any architecture and any make. In the upcoming scenarios, Hewlett-Packard DL385 dual
quad-core servers are used as the host machines in the virtual environment.
The guests are the operating systems that reside on the hosts. Multiple
guests “live” on the same host machine, and it is common to have Windows XP, Windows
Server 2003, Windows Server 2008 and Linux machines all living on the same underlying
physical server. What allows these guests to share the same box is the
hypervisor.
Of these, the leader in virtualization is VMware and its Virtual Infrastructure hypervisor.
VMware’s hypervisor is the foundation on which management of the underlying physical
hardware occurs. Memory management, CPU scheduling, virtual switching and network
data, and access to specific hardware controllers all happen within this layer, known as the
VMkernel, shown below (VMware, 2009).
Figure 1 VMkernel
Benefits of Virtualization
Consolidation is the ability to reduce the number of physical servers in the datacenter or the
enterprise. Servers typically run at well below their total capacity: RAM is often
underutilized, processors are underworked, disk space is not used efficiently and enterprise
applications often depend on different architectures. Virtualization provides the way to
consolidate those servers and applications onto one set of physical hosts.
Resource provisioning is the ability to give operating systems and applications
the resources they require to run effectively. Thin provisioning gives the operating system
what it needs to run effectively and nothing more – the ultimate usage of just the right
amount of resources. Virtualization provides for this, along with on-the-fly re-allocation of
resources: RAM and processors are added at the click of a button instead of through all the
steps and planning tasks required in a physical environment.
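Thin provisioning can be illustrated with a sparse file, which behaves much like a thin-provisioned virtual disk: the guest sees the full advertised capacity, while the host allocates storage blocks only as data is actually written. The following is a minimal sketch, not VMware's implementation; the file name and sizes are arbitrary examples.

```python
import os
import tempfile

# Create a 100 MB "thin disk": the file advertises its full size,
# but a sparse file consumes physical blocks only as data is written.
path = os.path.join(tempfile.mkdtemp(), "thin_disk.img")
with open(path, "wb") as f:
    f.truncate(100 * 1024 * 1024)  # guest-visible capacity

apparent = os.path.getsize(path)        # size the "guest" sees
actual = os.stat(path).st_blocks * 512  # blocks actually allocated

print(apparent)          # 104857600
print(actual <= apparent)
```

On a filesystem that supports sparse files, `actual` stays near zero until data is written, which is exactly the economy thin provisioning offers.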
Disaster recovery and business continuity are greatly accelerated and enhanced through
virtualization. A typical recovery from a bare-metal disaster can take up to 40 hours.
Consider all the steps that go into the standard recovery model, from acquiring and
configuring the physical hardware, to loading and configuring the operating system, to
restoring the backup data and bringing the system back online. Virtualization
provides cloning and snapshot technology that makes duplicating the complete operating
system, with all applications and data, a matter of a few mouse clicks. That duplicate
image is both the backup and the operating system – one only needs to power on the virtual
image to be back in business with virtually no downtime.
Testing environments are quickly brought up and taken down with virtualized test
machines. These include testing of individual operating systems, patch and release
testing, application testing, cluster and farm testing and load testing. Individual test
machines, complete clusters and even entire domains can be quickly brought online with
virtualized machines and virtual networks.
Migration tasks, such as moving physical servers between datacenters and across the
country, are simplified with virtualization. Physical-to-virtual, or P2V, conversion is the
fastest and easiest way to migrate physical machines from one location to another.
VMware’s Converter installs a small-footprint agent on the physical host, then copies the
entire machine to a virtual disk file. That flat file is then easily copied to a new location or
sent via FTP across the country to the new datacenter. One only needs to bring the VM
online, then take down the physical machine and ship it to the new location.
Hardware Maintenance is made easy with virtualization. VMware’s vMotion provides for
moving guests off their underlying physical hardware to another host seamlessly with no
interruption to service. A production server can be moved off a host while the host
undergoes maintenance or hardware upgrades then migrated back all while powered on
and serving clients.
Centralized management of the datacenter is the ultimate administration tool, and with
virtualization the entire datacenter is managed through one console. Individual servers are
managed through the same console as disaster recovery; operating system upgrades and
updates are handled through the same console as provisioning RAM and adding
processors. The single pane of glass provided by VMware’s VirtualCenter is an
administrator’s dream.
The benefits of virtualized environments are many. As shown above, virtualization gives
the enterprise the tools to quickly and easily complete many tasks that were
once cumbersome and time consuming. As a platform, the virtual environment provides
flexibility and, more importantly, agility to the datacenter administrator. The following case
study shows how one long-term healthcare provider implemented VMware’s ESX Servers and
Virtual Infrastructure on Hewlett-Packard server and storage hardware to migrate from an
offsite, hosted environment to a completely in-house solution with a robust, scalable and
agile virtual infrastructure.
Case Study of Guardian Healthcare Services
Guardian Healthcare Services is a skilled nursing care provider. They own and manage 14
long-term care homes and rehabilitation centers in three states. Guardian also owns and
operates its own pharmacy serving its homes and other third-party homes. The pharmacy
delivers medication to all the nursing homes through a daily carrier service.
Healthcare facilities are responsible for handling huge amounts of data, and information
systems and electronic medical records are at the heart of Guardian’s operation. This data
ranges from day-to-day operational information to life-and-death patient information. The
ability to access patient records quickly is critical to the success of the business and the
safety of the patients.
The systems that serve the residents range from enterprise
communication systems, such as email and portals, to specialized applications such as
electronic charting software, dietary and meal planning software and nursing and daily
patient records systems.
Guardian’s IT infrastructure had historically been hosted at an offsite, out-of-state
service provider. The hosting provider delivered multiple services to Guardian: it
hosted Outlook email and provided server rack space for many of Guardian’s servers,
including their web server, SharePoint server and SQL backend. It also hosted several
critical applications, including scheduling software, electronic patient records systems and
business-critical financial applications.
As Guardian grew, however, the hosted solution began to show limitations. Guardian’s
ownership demands agility in the market: they want the ability to move quickly on
opportunities and to make business-driven decisions quickly and independently of
technology resources.
Previous Status
As of January 1, 2009, Guardian’s infrastructure was a mostly hosted solution with a
few systems in-house. Email and communications systems were handled by the hosting
provider, as were financial applications, facility and user files, print services and access to
the systems through Citrix clients. Facilities used a combination of thin clients and PCs to
connect to the network. The Citrix desktop connections, and the onward connections to the
systems, were all maintained by the offsite hosting provider, with T-1 connections to each of
the facilities and a dedicated T-1 to the corporate offices in Nashville.
Under this topology Guardian maintained its own Active Directory with a Windows Server
2003 domain controller and a local file and print server for business and management
operations at corporate headquarters. All other systems were hosted, with the exception of
the pharmacy servers at Tupelo.
Users at remote facilities would authenticate to the domain, connect to Citrix and then
access whatever applications were loaded on their Citrix desktops. Nurse users typically
had only one or two critical applications on their desktops, while full office users had
nursing applications, financial applications, and other business software such as Microsoft
Office and Outlook email.
Helpdesk and user connectivity issues were all handled by the hosting provider, with few
exceptions. This solution works well in small environments with relatively few
problems. However, Citrix, in addition to being somewhat costly, was also plagued with
printer issues and hung sessions. User access requests also had to pass through various
channels before finally being handled by the hosting provider.
To add facilities to the Guardian network, the project plan went through the hosting
provider. The hosting provider would determine what it needed to bring on a new site: the
hardware that would be required, the network and connectivity needed to support the
site and any other additional requirements. It would schedule and supply the outside
parties to complete the work and then charge Guardian for these services.
Additional hosting followed the same procedure. The hosting provider would either
provision its own servers for Guardian, or Guardian would ship its servers to be housed
at the hosting provider’s site, paying a fee for rent, rack space and administration tasks.
Some applications, such as the financial application, were hosted. The software vendor
provided service to many companies at the same hosting provider location. For example,
SQL named instances were used, one for each hosted company, all on a single or
clustered set of servers located at the hosting provider. The hosting provider owned the
SQL server while Guardian owned its database and data.
This arrangement provides a great solution for companies that do not wish to handle their
own infrastructure. It eliminates the need for in-house IT staff and an in-house helpdesk,
and it removes the end user and client from maintenance and upgrades. On the other hand, it
creates a second layer of complexity that may or may not outweigh its benefits.
Desired State
As companies grow and need quick access to their data, a hosted solution may not
perform as well as expected. It is much easier to pull data from your own servers than
to request and then access the data at the host site. It is also much easier to reset a
session, add servers or applications, or even add a user to SharePoint or Exchange
mailboxes when those systems are under the control of a single in-house entity.
With the benefits of a self-hosted solution in hand, Guardian decided to move away from
the hosted solution to its own. Their new solution would provide a way to be agile, to be
creative, to mine data and gather important operational information quickly and to present it
effectively. It would provide a way to maintain quality long-term care and provide new
insights to improving processes and systems.
The foundation for the new systems would rely on virtualization from VMware, servers and
backend storage from Hewlett-Packard, business and office applications from Microsoft
and specialized healthcare software from several well-known healthcare related software
vendors. This is the story of the migration and how these products provided a solid, robust
and manageable virtual infrastructure for Guardian Healthcare Services.
Project Planning
The migration project started with a simple plan to migrate the servers and services from
the hosting provider to Guardian’s corporate offices. At first Guardian painted in very broad
strokes and later worked out tasks in finer detail. The migration project plan followed the
basic tenets of good project planning: Guardian developed initial ideas, developed a plan,
designed deliverables, implemented the plan and then closed and evaluated the final
outcome. Kathy Schwalbe (2007), in her book on project planning, noted some of the
outcomes and advantages of a good project plan, including:
During the migration process Guardian found all of these to be true of its project plan. With
a plan in hand it was easier to gain buy-in from stakeholders and participants.
Nurses and support staff are Guardian IT’s customers, and by giving them a working plan
with timelines and milestones the project was fully supported. There was better internal
coordination among IT, corporate and operations staff. The plan helped them meet
goals and deliverable dates, and it also contributed to morale as a blueprint for
success.
Project Goals
The project plan set out a typical timeline using Microsoft Project 2007, which helped greatly
in looking at the big picture and then breaking it down into smaller tasks. The larger task was
to migrate an entire company of over 1,000 users, 14 nursing homes, thousands of
residents, its servers, its systems and its data. The following tasks had to be
accomplished to fully migrate:
Given these requirements it was easy to break them down into individual tasks and assign
resources accordingly.
Infrastructure Planning
One of the benefits of a self-hosted solution is the ability to control and lead. An
enterprise’s future is largely held back or empowered by its choice of technologies.
While most organizations have a mixed environment, Guardian’s choice has always
been systems integration. Typically each system is its own environment, and platforms are
chosen based on the unique requirements of the application and software. For this project a
more global approach was taken, to find the most cost-effective and easily implemented
solution that provides for multiple platforms and requirements.
Migration Considerations
Guardian used the service provider to host several key pieces of equipment, including
server systems and networking equipment. The servers included a SharePoint farm of three
servers (SharePoint, SQL and web), a medical pharmacy server which supplies pharmacy
information to all the remote facilities, and two application-specific MySQL servers
used for electronic patient charting.
Application Considerations
Also under consideration were application-specific requirements. The software used at the
host company was a mix of everything from Citrix to Microsoft, from x86-dependent
software to x64, and from Microsoft SQL Server to MySQL.
User Configurations
Administrative office users accessed the Microsoft Office suite, including Word, Excel and
Outlook, through the desktop. They also accessed more specific applications such as the
financial package, electronic charting software, dietary and meal planning software, and
scheduling software.
Nurse users on the nursing home floors accessed the network via wireless thin clients
mounted on rolling medical carts stationed in each hallway. The nurse user typically
accessed ECS software for patient charting and medication passes.
Remote Connectivity Considerations
The service provider was using Citrix desktops as the remote access path to internal servers
and services. Guardian owned the Citrix licensing, and the hosting provider hosted the
licenses and the Guardian desktops. This provided remote connectivity for 1,000 users,
up to 350 of them concurrent.
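The arithmetic behind sizing such links is straightforward: divide the T-1 line rate by an assumed average per-session draw. The 25 kbps figure below is an illustrative assumption for a thin-client session, not a number from Guardian's deployment.

```python
# Back-of-the-envelope T-1 capacity estimate for thin-client sessions.
T1_BPS = 1_544_000        # raw T-1 line rate, bits per second
PER_SESSION_BPS = 25_000  # assumed average per-session bandwidth (illustrative)

sessions_per_t1 = T1_BPS // PER_SESSION_BPS
print(sessions_per_t1)  # 61
```

By this rough estimate a single T-1 carries on the order of 60 concurrent sessions, which is why a facility with a busy nursing floor may need more than one circuit.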
With these elements in hand Guardian planned the direction of its future. It was easy to
compare what they had, and why it did or in many cases did not work optimally, with what
they wanted and where they wanted to be. The goal was a complete, centrally managed
platform that provided high availability for critical patient systems and the ability, as
Guardian grew, to integrate data and applications seamlessly.
Migration Path
o Determine owned/hosted servers – meeting with hosting provider to discuss
which systems are owned and which are hosted
o Determine owned/hosted network equipment - meeting with hosting provider
to discuss which systems are owned and which are hosted
o Determine owned software licensing – meeting with hosting provider to
determine which licenses Guardian owns, leases and rents
o Determine amount of hosted data – meeting with hosting provider to
determine the amount of user data, facility shared data, globally shared data
and application data
o Determine transfer of licensing – meeting with hosting providers and vendors
to determine the account numbers and licensing transfer methods
o Determine server migration – meeting with hosting provider to determine the
schedule for migration of the SharePoint farm and pharmacy servers
o Determine network equipment migration – meeting with hosting provider to
determine the schedule for migrating routers
o Determine data transfer method – meeting with hosting provider to determine
the method of data transfer and setup of external FTP site
o Determine user migration - meeting with hosting provider to determine the
scheduling and management of user migration from Minnesota to Tennessee.
Platform Requirements
o Determine number of expected users – inventory Active Directory for the
active user count, to be used for licensing requirements and load balancing
o Determine number of expected services – inventory, with users and with the
hosting provider, the services provided, including helpdesk, user adds and
miscellaneous tasks
o Determine number of expected applications – inventory with service provider
on the number of installed applications on all hosted and owned servers
o Determine application dependencies – work directly with individual vendors to
determine the software requirements of each application listing
dependencies, ports and configurations
o Determine number of expected servers – count number of servers at hosting
provider, discuss load limits and consult application list for consolidation
possibilities
o Determine network requirements – discuss current networks with hosting
provider and network service provider; increase bandwidth to 3 T-1s to each
site
o Determine rack/space requirements – based on server requirements and
number of VM hosts required
o Determine power requirements – based on number of host servers, storage
arrays, switches
o Determine cooling requirements – based on racked servers and space
Core Requirements
Users
1000+ users
350 concurrent users
150 Exchange users
100 Office users
Servers
10 terminal servers
1 Exchange server
1 SharePoint server
1 web server
1 mobile information server
1 file/print server
2 domain controllers
1 SQL server
2 MySQL servers
1 Great Plains server
1 VI server
Total: approximately 20 servers
The requirements-gathering tasks identified over 1,000 users, 350 of them concurrent,
and 12 critical applications required for daily business activity as well as nursing and
resident care. The applications often require their own SQL or MySQL server. Added
to this is the need for various enterprise servers, including domain servers, terminal servers,
communications servers and file and print servers, to support Guardian’s Active Directory
domain.
Server Consolidation
Server consolidation is one of the key benefits of virtualization. It helps alleviate server
sprawl and compacts the datacenter. Virtualization using VMware provides several ways to
consolidate servers. One of the most basic ideas behind consolidation is the ability to share a
common resource. In most datacenters today there is a tendency to over-purchase disk
space: disk storage is getting cheaper, and larger disks are installed to provide for growth
that often never reaches the disk’s potential. It has also been estimated
that servers use only 10-15% of their total processor and memory capacity. Given the over-
purchased disk space and over-provisioned machines, the datacenter is left with
machines that are underutilized, taking up space and capital.
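That utilization figure is what drives consolidation ratios. A back-of-the-envelope sketch follows, where `avg_util` is each server's average load expressed as a fraction of one host's capacity, and the 65% target utilization leaves headroom for spikes and failover; both numbers are illustrative assumptions, not Guardian's measurements.

```python
import math

def hosts_needed(n_servers, avg_util, target_util=0.65):
    """Estimate how many virtualization hosts can absorb n_servers,
    each averaging avg_util of one host's capacity, without pushing
    any host past target_util (headroom for spikes and failover)."""
    total_load = n_servers * avg_util
    return max(1, math.ceil(total_load / target_util))

# 20 legacy servers idling at roughly 12% of capacity:
print(hosts_needed(20, 0.12))  # 4
```

Even this crude model shows why a rack of lightly loaded servers collapses onto a handful of hosts; real sizing would also account for RAM, storage and failover capacity.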
VMware Server 2 is a fast, easy way to reclaim those underused resources and put them to
good use hosting other servers. The idea is simple: install the virtual server onto the
existing operating system, where it acts like any other application. The application takes
that extra empty disk space and those unused resources and presents them for use as a
host. The host is used to install virtual machines of almost any operating system, up to the
size of the existing disk space. By installing VMware Server 2 on an XP machine with an
extra 40 GB of disk space, a platform is available for installing any operating system from
Linux to Windows Server 2008. The guest operating system runs independently of the
underlying operating system, using a virtual network adapter and its own IP address.
Consolidation is one of the key reasons that Guardian chose to virtualize its environment.
The number of servers required to build their infrastructure was the determining factor.
While some of the servers could be installed as additional guests on physical servers, the
terminal servers and enterprise level servers needed a more solid foundation.
This is where VMware’s ESX Server and Virtual Infrastructure Client software provided the
ultimate solution for server and datacenter consolidation. Guardian required around 20
servers in total. The task of provisioning 20 physical servers, plus rack, power, cooling and
other related work, was not only labor intensive but costly. The virtual platform was, and is,
the best solution for building the datacenter because of its inherent ability to dynamically
share resources and storage. ESX Server is the enterprise solution: it uses shared storage,
or a SAN, as the storage for the virtual guests, and the front-end servers’ memory and
processor power for the guest machines.
Guardian researched many server and storage options for its virtual datacenter. To
support 20 or more virtual servers Guardian needed solid, dependable hosts and an easy-to-
manage storage solution that could grow as necessary. These requirements led Guardian
to a clear winner in both cost and quality: HP DL385 G5 servers with dual quad-core
Opteron processors, MSA 2000 series storage devices and HP ProCurve switches for
connectivity.
Figure 7 HP Hardware
HP Hardware and Installation
HP DL385 G5 Servers
The DL385s are 2U enterprise-class servers with Integrated Lights-Out (iLO 2) remote
management and 8 memory slots. As host servers for the VMware ESX software they
were a great choice. Hewlett-Packard and VMware are long-term partners: HP DL385
servers have been leaders time and again in virtualization testing, and the DL385
consistently outranks competitors Dell and Sun (Hewlett-Packard, 2009). The rack-
mounted and blade-system servers are built with virtualization in mind. Guardian’s solution
was a three-server cluster using DL385s. At around $2k each, they were not only the perfect
fit for the technological requirements; the minimal capital expense allows even medium
and small businesses to deploy virtualized environments.
The MSA 2000 series storage arrays are enterprise-level 2U shared storage devices. The
MSAs provide strong performance using Ethernet networking and iSCSI connectivity.
Guardian’s choice of iSCSI over Fibre Channel was based on cost, but on other
factors as well. Fibre Channel networks are not only more expensive, they require skill
sets beyond those of typical network administration. MSA iSCSI setup and
management is easy and fast, and does not require expensive switches or adapters; it runs
directly on top of the current TCP/IP network and integrates perfectly with Gigabit networks.
The storage array is inexpensive and provides scalability with high-performance SAS
drives. Guardian’s configuration uses six 450 GB 15k SAS drives in each array, configured as
RAID 50 for redundancy and performance.
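The usable capacity of that layout follows directly from the RAID 50 geometry: the drives are grouped into RAID 5 spans, each span gives up one drive's worth of capacity to parity, and the spans are striped together. A quick sketch of the raw usable capacity, before filesystem and vendor formatting overhead:

```python
def raid50_usable(drives, drive_gb, span_size=3):
    """RAID 50 stripes data across RAID 5 spans; each span
    sacrifices one drive's worth of capacity to parity."""
    assert drives % span_size == 0 and span_size >= 3
    spans = drives // span_size
    return spans * (span_size - 1) * drive_gb

# Guardian's arrays: six 450 GB SAS drives as two 3-drive RAID 5 spans
print(raid50_usable(6, 450))  # 1800
```

The array can lose one drive per span and keep running, which is the redundancy half of the trade; the striping across spans supplies the performance half.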
The ProCurve 2810 switch from HP is a low-cost, low-maintenance switch that is well suited
to the virtual environment. It provides dual-personality ports and connectivity for
10/100/1000Base-T or mini-GBIC. Guardian’s choice of switch was largely based on the
ProCurve’s Layer-2 switching and VLAN tagging capabilities. The ProCurves work well in
any environment and provide up to 256 separate VLANs; Guardian’s virtual environment
contains 3 VLANs, so the ProCurves leave room for future growth. The HP ProCurve
switches handle virtual LAN traffic and provide for virtual network growth, with the ability to
separate those networks by function, security level or application.
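At the frame level, VLAN tagging works by inserting a 4-byte 802.1Q tag into the Ethernet frame between the source MAC address and the EtherType. The sketch below shows that mechanic; VLAN 30 is just an example ID, not one of Guardian's actual VLANs.

```python
def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MACs.
    The tag is the TPID 0x8100 followed by a 16-bit TCI holding
    priority (3 bits), DEI (1 bit) and the 12-bit VLAN ID."""
    assert 0 <= vlan_id < 4096
    tci = (priority << 13) | vlan_id
    tag = b"\x81\x00" + tci.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]

# Untagged frame: dst MAC + src MAC + EtherType 0x0800 (IPv4) + payload
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = tag_frame(frame, vlan_id=30)
print(tagged[12:14].hex())                              # 8100
print(int.from_bytes(tagged[14:16], "big") & 0x0FFF)    # 30
```

A VLAN-aware switch reads that tag to keep the three networks isolated on shared wire, which is how one set of ProCurves can carry management, storage and guest traffic separately.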
Virtualization Platform Installation
Install and Configure Switches – install two switches for use in the virtualized
environment with three separate VLANs
Install VMware ESX servers – install ESX Server on the three HP cluster servers
Install SAN – install the HP MSA 1.5 TB storage array as RAID 50 and
create a LUN for use by the virtualized environment
Setup Virtual Infrastructure – install the Virtual Infrastructure server and set up
the virtualized environment
The virtual infrastructure is generally a simple system consisting of three major parts. The
environment contains disk storage for the virtual machines and necessary files for creating
machines such as ISO images and clones. It also contains the server hardware layer that
provides processing power and memory for the virtual machines. And finally it contains the
switching network for connectivity between the servers and the storage and connectivity
between the virtual machines and the network at large.
Storage Installation
Install 2 storage arrays
Set up IP addresses
Once the basics are set up and connectivity has been tested, the next step is to install
the management server component. The management server, also known as the virtual
center, is used for all management tasks and is the dashboard for the virtual environment. It
too is a simple install.
vCenter Installation
Install vCenter
Install Licensing
Install VM converter
Install VM Update Manager
Install VI Infrastructure vCenter Manager
Figure 8 vCenter Virtual Environment Management
Enterprise Servers and Installation
Within Guardian’s environment are several applications that rely on different architecture
and platforms. The beauty of a virtualized environment is that servers x86 servers can be
built on top of underlying x64 bit hosts. Similarly, x64 bit virtual machines can be built on
top of any x86 chip sets that have Virtualization Technology (VT) capabilities. In
Guardian’s environment most applications ran well on x64 bit Windows 2008 Standard
Servers.
Software Installation
Install Microsoft Windows Server 2003 – install required enterprise servers
into the virtualized environment
Install Microsoft Windows Server 2008 – install required enterprise servers
into the virtualized environment
Install Microsoft Windows Server 2008 Terminal Servers – install required
terminal servers into the virtualized environment
Install Microsoft Exchange Server 2007 – install the required Exchange
server into the virtualized environment
Install Microsoft SQL Server 2005 – install the required SQL server into the
virtualized environment
Install service packs and updates – install updates and service packs as
necessary on all servers and set up update maintenance plans
Install application software and updates – install applications and required
updates on servers, and shared applications on terminal servers
Test installs and test connectivity – use test users to log on, launch and use
applications on each server and each terminal server.
GRDHC01FP – x86 Windows Server 2003 Standard
File and Print
File server
Print server
Profile server
GRDHC01ECS2
Electronic Charting Software MySQL server
2008 Terminal Servers
The Remote Application Server is a service of 2008 Terminal Services that presents
remote applications to the user as if they were locally installed. The Remote Application, or
RemoteApp, is similar to a typical RDP session in that it uses the RDP protocol. RemoteApp,
however, creates a session in which the user sees only a single application instead of a
complete remote desktop. If a special or shared application is required remotely, the
RemoteApp service provides the application session to the user. The application can be
accessed through a specially created RDP icon, which can then be emailed or published,
and the remote application will also start if an associated file extension is opened. RemoteApp
also comes with a built-in web portal called Terminal Services Web Access. The web page is
used to distribute access to individual applications and also provides a similar
link that can be used for full RDP access. Guardian successfully used TS Web Access by
publishing the page through SharePoint: users at remote nursing facilities accessed
SharePoint and then whatever application they needed.
A second feature of 2008 Terminal Services is the Terminal Services Gateway. The
Gateway provides a way for remote users who are not connected to the network to use an
Internet connection and a specially crafted RDP icon to access internal resources. This is
an outstanding way to provide access to enterprise applications for remote or roaming
users. There is no longer a need to use a VPN connection and then an RDP session. The
Terminal Services Gateway creates the RDP icon with built-in credentials. Those
credentials are used to connect directly to and through an external server to the internal
server containing the application. The outside user simply connects to the Internet and
double-clicks the Gateway-created RDP icon, and the RDP protocol is tunneled over
HTTPS to the internal resources (Microsoft, 2009).
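Both the RemoteApp and Gateway behavior described above are ultimately driven by settings stored in the .rdp file itself. As a rough illustration only (the server names and application alias below are hypothetical, and a real deployment would generate these files through the Terminal Services management tools), an RDP icon that launches a single RemoteApp through a TS Gateway can be sketched as:

```python
# Sketch: build the text of a .rdp file combining RemoteApp (single
# published application) with a TS Gateway hop (RDP tunneled over HTTPS).
# Host names and the application alias are illustrative placeholders.

def build_rdp(app_alias, internal_host, gateway_host):
    # .rdp files are plain text: one "name:type:value" setting per line.
    settings = [
        f"full address:s:{internal_host}",        # internal terminal server
        "remoteapplicationmode:i:1",              # RemoteApp: single app, no desktop
        f"remoteapplicationprogram:s:||{app_alias}",
        f"gatewayhostname:s:{gateway_host}",      # external TS Gateway
        "gatewayusagemethod:i:1",                 # always connect through the gateway
    ]
    return "\r\n".join(settings) + "\r\n"

rdp_text = build_rdp("ChartingApp", "grdhc01ts.guardian.local",
                     "gateway.guardian.com")
```

The resulting text, saved with a .rdp extension, is the kind of icon the paper describes emailing or publishing to users.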
2008 Terminal Services Session Broker is Microsoft's built-in answer to Terminal Services
load balancing. The Session Broker is a service that runs and tracks all active sessions.
Users are reconnected to existing sessions or connected to new sessions depending
on the information held by the Session Broker computer. Setup of load-balanced server
farms is greatly simplified with the service. Servers are added into a farm of two or more
servers and a single DNS entry identifies the farm. Remote computers are simply pointed
to the DNS entry and the Session Broker does the rest of the work. If a server is too busy,
the connection is brokered to another, less busy server. If the session is a reconnection,
the Session Broker re-establishes the connection for the client, presenting the user with
their previous session and work. The Session Broker acts like a traffic cop, directing
users' sessions to less busy servers while maintaining current sessions seamlessly.
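The broker logic described above can be sketched conceptually: reconnect a user to an existing session if one is tracked, otherwise direct the user to the least-busy server in the farm. This is a simplified model of the behavior, not Microsoft's implementation.

```python
# Conceptual model of Session Broker routing: existing sessions are
# resumed; new connections go to the least-busy server in the farm.

class SessionBroker:
    def __init__(self, farm):
        self.load = {server: 0 for server in farm}  # active sessions per server
        self.sessions = {}                          # user -> server

    def connect(self, user):
        if user in self.sessions:                   # reconnection: resume prior session
            return self.sessions[user]
        server = min(self.load, key=self.load.get)  # pick the least-busy server
        self.load[server] += 1
        self.sessions[user] = server
        return server

broker = SessionBroker(["TS1", "TS2"])
first = broker.connect("alice")    # new session on one farm member
second = broker.connect("bob")     # balanced to the other farm member
resumed = broker.connect("alice")  # reconnects to alice's original server
```

In the real service, the single DNS entry for the farm plays the role of the `connect` entry point here: clients point at one name and the broker decides which server answers.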
Migration
With the infrastructure servers, applications and terminal servers in place, migration was a
matter of scheduling and completing migration tasks.
Migration tasks
Migrate SharePoint – use the simple and free VMware Converter to create P2V
images of the servers, place them on USB drives and ship overnight
Migrate email – have the host create updated PSTs of all corporate mailboxes
hosted on the provider's Exchange servers, determine a cutoff time for email,
have the host ship the PST mailboxes overnight, load the PST mailboxes on
the new Exchange 2007 server and point all clients to the new mail server
Migrate data – have the host create zip files of each user's data, user profiles,
facility shared data and global data and place them on USB for shipment
Migrate users – change thin client and PC settings to point to the new Terminal
Services farm, log users off and then log them on to the new domain
FTP and USB Transfer
VMware Server 2 – create the datastore, load the VMDK files and create VMs
of the SharePoint and SQL servers
Migrate Users
Point thin clients to the virtualized terminal server farm and log users on to
the virtualized domain
Go Live
Set thin clients and PCs to point to the new domain and go live with all client
services
Unique Benefits Realized
Most of the migration and datacenter setup was very straightforward. The path was
simple: build up the infrastructure and shortly thereafter migrate the servers, services,
users and data from the third-party hosting provider to Guardian as the new hosting
provider. The
intricacies of migration and datacenter virtualization however, provided some of the most
interesting challenges for the Guardian team. VMware and its unique properties saved the
day on more than one occasion.
P2V
Physical to virtual conversions, or P2Vs, were used with great success. A physical to
virtual conversion is the act of creating a virtual image, or file, out of an existing server.
In simple terms, the converter repackages the operating system into a bootable file. That
file contains the operating system, the data files and the configuration that make it a
"virtual machine". Guardian used P2V conversions for many tasks before, during and
after the migration project. Before the migration, P2Vs were used to take images of
currently running machines. The converter program was installed on machines located at
the hosting provider. The converter ran and created a locally stored file – the virtual
image. Guardian then had the hosting provider ship the file overnight on a USB drive to
corporate. VMware Server 2 was already installed and the virtual images were loaded.
Complete servers were brought online in minutes without requiring the physical machine
and the cumbersome process of un-racking, packing, shipping and re-racking to eat into
valuable production time. Servers were moved by simply shutting down the physical
server at the remote site while the identical virtual server was powered on at the new site.
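Because the P2V images traveled overnight on USB drives, one cheap safeguard before powering on the virtual copy is to compare a checksum recorded at the hosting provider against the received file. This is a general-purpose sketch around the workflow described above, not a feature of VMware Converter itself; the file paths are hypothetical.

```python
# Sketch: verify a shipped P2V image arrived intact by comparing SHA-256
# digests computed at each end. Reads in chunks so multi-gigabyte VMDK
# files do not need to fit in memory.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path, expected_digest):
    """True if the file's digest matches the one recorded before shipment."""
    return sha256_of(path) == expected_digest
```

The sending side records `sha256_of("guest.vmdk")` before the drive ships; the receiving side calls `verify_image` before loading the image into the datastore.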
Cost-effective Backups
Virtualization makes data more portable: not only the data but the entire server can be
stored on an external drive such as a thumb drive. Special care must be taken
to ensure that the datacenter is regularly backed up on three levels. First, the storage array
should be configured as a RAID for redundancy against failures, and the array should be
backed up to a separate storage device. Second, the virtual machine itself must be backed
up, which includes the associated VMDK files and virtual configuration files. If these files
are not properly handled the virtual machine is inaccessible. Finally, the data itself should
be backed up using common backup software such as Backup Exec or even ntbackup. This
provides three levels of data protection in case of any bare-metal disaster. With these
three levels of backup the data is safe, the datacenter is safe, and the individual servers
and their unique configurations are safe.
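The second level above, backing up the virtual machine itself, amounts to copying the VM's disk and configuration files to a separate device while the VM is powered off. A minimal sketch, assuming a simple directory-per-VM layout (the paths and extensions shown are illustrative, not Guardian's actual layout):

```python
# Sketch of the second backup level: copy a powered-off VM's disk (.vmdk)
# and configuration (.vmx) files to a separate backup location, skipping
# logs, swap and lock files that need not be preserved.
import shutil
from pathlib import Path

def backup_vm(vm_dir, backup_dir):
    vm_dir, backup_dir = Path(vm_dir), Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in vm_dir.iterdir():
        if f.suffix in (".vmdk", ".vmx"):       # disk and config files only
            shutil.copy2(f, backup_dir / f.name)  # copy2 preserves timestamps
            copied.append(f.name)
    return sorted(copied)
```

Keeping both file types together is the point the text makes: a VMDK without its configuration file (or vice versa) leaves the virtual machine inaccessible.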
Cloning and Templates
The creation of an exact copy of a running operating system is known as cloning. The
machine is imaged through VMware as an exact duplicate of the original system and
stored as a clone. The clone image is a mirror of the original: the files, the license keys,
the drivers and even the IP addresses are exactly duplicated. This is extremely helpful
when creating multiple servers with the same hardware and software requirements. It's
also a way to create and store a master image of the systems for backup.
Guardian successfully used VM clones in its terminal server farm setup. Since all terminal
servers require the same applications for users and the same sets of permissions, cloning
was the perfect way to build identical machines. VMware was used to first create the
VM of a Windows 2008 Server. The Terminal Server role was added to that machine and
then the required patches, updates, IP addresses and of course licensing were applied.
The applications needed by remote users were loaded on the terminal server and it was
placed in the terminal server farm. A clone and template were created from this original
server with the exact same applications and settings. Once a new cloned image is
brought up and started, it is just a matter of running sysprep on the machine to strip out
any old identification that may conflict, such as machine ID and IP addressing. Now
whenever a new server is required to serve additional clients, a virtual machine is created
from the master template.
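The clone-then-re-identify workflow above can be modeled conceptually: every new VM inherits the template's applications and settings verbatim, then receives fresh machine-specific identity, which is the role sysprep plays on the real Windows guest. The field names and values below are illustrative, not VMware data structures.

```python
# Conceptual model of template-based provisioning: stamp out a copy of the
# master template, then reset the identity fields that would otherwise
# conflict (hostname, IP address, machine id).
import copy

TEMPLATE = {
    "os": "Windows Server 2008",
    "role": "Terminal Server",
    "apps": ["ChartingApp", "Office"],   # shared apps baked into the template
    "hostname": None, "ip": None, "machine_id": None,
}

def new_from_template(hostname, ip, machine_id):
    vm = copy.deepcopy(TEMPLATE)         # clone: identical apps and settings
    vm.update(hostname=hostname, ip=ip,  # re-identify, as sysprep does on
              machine_id=machine_id)     # the actual guest OS
    return vm

ts3 = new_from_template("GRDHC01TS3", "10.0.0.13", "ts3-guid")
```

Using `deepcopy` matters here for the same reason templates matter in VMware: the master must stay untouched no matter how many servers are stamped out of it.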
Conclusion: A Completely Virtualized Environment
One of the final and most exciting benefits realized through the use of these technologies is
the final outcome itself. A datacenter built completely on a virtual infrastructure is not only
a more effective use of resources, it is also a showpiece of technology. In the
larger picture of healthcare, long-term healthcare is a special segment that historically has
not been involved with technology. High-tech doctor’s offices and university hospitals are
the segments that typically afford and leverage high-technology in the delivery of their
healthcare services.
Long-term healthcare follows the model of smaller hospitals and patient care facilities in its
use of technology. Guardian benefits from visionary leadership that fully embraces
technology as a way to support its users so they can provide superior patient care. Guardian
is extremely happy with the final outcomes of the project from both the cost savings as well
as the advanced and solid foundation they now have. They can continue to grow and
expand their services with their fully virtualized datacenter environment.
References
Drinkwater, A (2009). The importance of early role definition. Retrieved May 1, 2009, from Project
Connections Web site: http://blog.projectconnections.com/project_practitioners/2009/03/the-
importance-of-early-role-definition.html
Goldbard, A (2009). The pitfalls of planning. Retrieved May 1,2009, from National Endowments for the Arts
Web site: http://arts.endow.gov/resources/Lessons/GOLDBARD.HTML
Hewlett-Packard, (2008). HP customer case study: Managed Care Company Hits 5 Nines Uptime with HP
Virtualization Solution. Retrieved May 3, 2009, from HP Web site:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-0006ENW.pdf
Hewlett-Packard, (2009). HP ProLiant servers earn best overall virtual performance. Retrieved May 1, 2009,
from Hewlett-Packard Web site:
ftp://ftp.compaq.com/pub/products/servers/benchmarks/ProLiant%20BL495c_DL385_%20VMmark_0
10709.pdf
McAllister, N (2009). Server virtualization under the hood. Retrieved May 1, 2009, from InfoWorld Web site:
http://www.infoworld.com/d/virtualization/server-virtualization-under-hood-147
McCain, C (2008). Mastering VMware Infrastructure 3. Indianapolis, IN: Wiley Publishing Inc.
Ontrack Systems, (2008, September 16). Terminal Services 2008 Design Document.
Robert Law Group and the ITAA, (2004, April 9). HIPAA and its Legal Implications for Health Care Information
Technology Solution Providers. Retrieved June 5, 2009, from Information Technology Association of
America Web site: http://www.itaa.org/isec/docs/hippawhitepaper.pdf
Schwalbe, K (2007). Information Technology Project Management. Boston, MA: Thomson Learning Inc.
Singh, A (2004, January). An Introduction to Virtualization. Retrieved May 1, 2009, from Kernel Thread Web
site: http://www.kernelthread.com/publications/virtualization/
Stuart, A (2006, June). Virtual technology, real benefits. Retrieved May 1, 2009, from HP Web site:
http://www.hpl.hp.com/news/2006/apr-jun/virtualization.html
VMware, (2009). VMware ESX 3.5 and VirtualCenter 2.5. Palo Alto, CA: VMware Inc.
Zylowski, R (2008). Deploying Microsoft Exchange in VMware Infrastructure. Retrieved May 23, 2009, from
VMware Web site: http://www.vmware.com/pdf/exchange_best_practices.pdf