Rethinking VDI
White Paper
IT priorities such as information security and cost control are often at odds with the preferences and needs of
end users, including PC hardware and software choice, ubiquitous network connectivity, and varying usage
requirements and technical knowledge.
When these factors are combined with ongoing needs to refresh PC hardware and adopt new operating
systems such as Windows 7, IT teams are left looking for any source of relief for escalating operational costs.
But for many organizations, VDI is not a silver bullet. With most VDI approaches, the improved flexibility,
efficiency, and security also come with a number of drawbacks.
Based on success reducing hardware costs through server virtualization, many organizations assume that
moving desktops into the data center will result in lower costs. While there are operational cost savings to be
gained through desktop virtualization, the reality is that moving desktop sessions from commodity PC
hardware to higher-cost servers and data center storage results in increased overall capital expense.
There are certain cases for which the efficiency and security benefits of desktop virtualization justify the data
center investment over a multi-year period. But, with budgets tighter than ever, many organizations need a
more immediate return on investment. They do not have the luxury of making capital investments today for
the uncertain promise of savings years down the road.
With server-hosted desktops, every session's display and input traffic must travel across the
network between the data center and the corresponding user. This puts additional demands on IT teams to add bandwidth as use of
desktop virtualization increases, consuming both time and budget.
With server-hosted desktops, the network becomes a point of failure for end-user computing sessions. This
creates a new category of risk that does not exist with today’s distributed PC computing architecture.
Unexpected network latency or downtime will take down large numbers of end-users simultaneously,
resulting in unhappy users and increased pressure on IT operations staff.
User experience is another hurdle. Many end-users expect that a specialized USB peripheral plugged into their PC will
immediately work. The moment that process becomes more complex than plug and play, the prospects for
the success of desktop virtualization are undermined.
Mobility poses a further challenge. Desktop virtualization solutions that only offer usage at fixed locations may suit a subset of
users, but the lack of ubiquitous network connectivity and the frequent need to cross organizational
boundaries prevent most desktop virtualization solutions from achieving wide-scale adoption.
But what if the same benefits could be achieved through a distributed desktop virtualization architecture that
preserves the experience, economics, and flexibility of the traditional PC computing model?
The fact that most desktop virtualization solutions are data center-centric is more vendor-driven than need-
driven. Early desktop virtualization approaches came from existing server virtualization vendors, who
leveraged their core technology by making desktop virtualization an add-on to server virtualization.
If one looks at desktop virtualization as a blank canvas, the data center is not necessarily the right place to
start. The ideal approach is one that delivers the operational and security benefits of virtualization techniques
without degrading the simplicity, economics and user experience of the current distributed PC computing
model.
At the heart of desktop virtualization sits hypervisor technology. As a result of the industry’s server
virtualization heritage, most hypervisors today run on servers. However, the technology exists to extend
hypervisor technology to traditional PC hardware, creating an opportunity to achieve the management and
security benefits of desktop virtualization using existing PCs.
Combining the management attributes of server-based desktop virtualization with a delivery model that
allows end-users to run their virtual desktop on traditional laptop or desktop PCs yields several benefits:
• Data center costs are kept to a minimum, with most computing power remaining at the endpoint
• Network utilization is limited and tightly controlled
• Mobile users can continue to use laptops and roam between online and offline
• The scaling curve is smoother, with little data center impact as new users are added
While there are two types of client-side hypervisors, only one delivers the benefits IT departments need to
make desktop virtualization a success.
The more common PC virtualization architecture today is a type-2 client hypervisor. A type-2 client hypervisor
is installed as an application within an existing native operating system, enabling virtual machines to run on
top of this existing OS.
Type-2 client hypervisors are easy to deploy and provide the basic capability of running multiple operating
systems on a single PC. However, since these virtual machines are highly dependent on the native host
operating system for key functions, they are limited by performance and security drawbacks.
These drawbacks are overcome by a Type-1 or “bare-metal” client hypervisor. With a Type-1 architecture, the
native operating system is eliminated or isolated into a separate disk partition. The hypervisor installs directly
onto the “bare-metal” PC hardware and executes without a native operating system. Any virtual machines
running on the hypervisor operate without a native operating system and are also completely isolated from
each other.
Following the precedent of server virtualization, where Type-2 hypervisors have gradually given
way to Type-1 hypervisors as the standard for performance and security, Type-1 hypervisor technology is an
increasingly popular option for client-side virtualization.
NxTop applies this Type-1 approach and consists of two components:
• NxTop Engine: “Bare-metal” client hypervisor and proprietary management technology that allows fully
managed virtual desktops to run on intermittently connected PCs.
• NxTop Center: Centralized management console to create, deploy, update and secure virtual desktops
for execution on NxTop Engine, as well as to remotely manage NxTop Engine itself.
While NxTop is designed to complement a server-based computing infrastructure, it has a very small data
center footprint, is not sensitive to network availability and latency, and makes extensive use of existing PC
investments.
Patches and updates are applied on a one-to-many basis. The IT administrator applies patches once to a
master virtual machine running on the NxTop management console. Upon republishing, the changed data
blocks are streamed to NxTop-enabled PCs. A patched image is assembled in the background and
transparently loaded on the next reboot.
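The one-to-many, block-level update model described above can be illustrated with a simple fixed-size-block comparison. The block size, hashing scheme, and names below are illustrative assumptions, not details of NxTop's proprietary streaming protocol:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed granularity; the actual block size is an implementation detail

def block_hashes(image: bytes) -> list:
    """Hash each fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]

def changed_blocks(old_image: bytes, new_image: bytes):
    """Yield (offset, data) for each block that differs between image versions."""
    old = block_hashes(old_image)
    new = block_hashes(new_image)
    for idx, digest in enumerate(new):
        if idx >= len(old) or old[idx] != digest:
            offset = idx * BLOCK_SIZE
            yield offset, new_image[offset:offset + BLOCK_SIZE]

# After the master image is republished, only the changed blocks
# need to travel to each endpoint.
old = b"A" * 8192                  # two identical blocks
new = b"A" * 4096 + b"B" * 4096    # second block patched
deltas = list(changed_blocks(old, new))
```

Because identical blocks hash identically, patching the master once produces a small delta set that every endpoint can fetch, rather than each PC downloading a full image.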
Unlike Type-2 client hypervisors running on untrusted operating systems, NxTop virtual machines are
completely isolated from one another. Malware in an unmanaged Windows desktop does not compromise a
managed NxTop virtual machine, even on the same hardware.
NxTop presents a consistent set of virtual hardware to the end-user operating system, simplifying migration
of users to new hardware platforms. Driver management and other hardware-specific compatibility
challenges are eliminated.
Integrated Security
All virtual machine and system data on NxTop-enabled PCs is encrypted, providing peace of mind in the
event that a PC containing sensitive data is lost or stolen.
IT administrators can also protect against data leakage and unauthorized use with NxTop’s robust policy
controls. Access to hardware such as USB ports can be restricted or filtered based on centrally defined
policies at global, group and individual-user levels. Virtual machines can be governed by time-based
expiration policies and on-demand remote disablement.
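The layered policy model described above can be sketched as a simple merge in which more specific levels override broader ones. The policy keys and values here are hypothetical, not NxTop's actual schema:

```python
def effective_policy(global_policy, group_policy, user_policy):
    """Merge policy layers so more specific levels override broader ones."""
    merged = dict(global_policy)
    merged.update(group_policy)
    merged.update(user_policy)
    return merged

# Hypothetical policy settings for illustration.
global_policy = {"usb_storage": "blocked", "expires": None}
group_policy = {"usb_storage": "read_only"}   # one group relaxes the USB rule
user_policy = {"expires": "2012-12-31"}       # a contractor's VM expires at year end
policy = effective_policy(global_policy, group_policy, user_policy)
```

With this precedence, an individual user's settings win over their group's, and group settings win over the global defaults.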
As an added layer of security, IT administrators can flag lost PCs for remote termination. If a lost or stolen PC
connects to a network, NxTop digitally shreds all data and encryption keys, then self-destructs.
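A minimal sketch of how such a remote-kill handshake might work, assuming the endpoint checks a server-side kill list whenever it connects; the class and method names are hypothetical:

```python
import secrets

class Endpoint:
    """Sketch of a flagged PC destroying its keys on next network contact."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.disk_key = secrets.token_bytes(32)  # key encrypting all local VM data
        self.disabled = False

    def on_network_connect(self, kill_list):
        """Called whenever the endpoint reaches the management server."""
        if self.device_id in kill_list:
            # Overwriting the key renders the encrypted disk unrecoverable,
            # even if the raw storage is later read back by an attacker.
            self.disk_key = b"\x00" * 32
            self.disabled = True
        return self.disabled

pc = Endpoint("laptop-042")
wiped = pc.on_network_connect(kill_list={"laptop-042"})
```

Because all local data is already encrypted, destroying the key material alone is sufficient to make the disk contents unreadable; a real implementation would also wipe on-disk key copies and shut the machine down.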
The combination of central virtual-image management, data backup and hardware abstraction dramatically
simplifies re-provisioning users in the event of a lost or failed PC. Simply register a new PC — even a
completely different hardware platform — and with a few mouse clicks the user is running their exact
desktop environment from the last time they connected to a network.
While solutions like NxTop start with a deployment that is closely related to today’s PC-centric computing
approach, you are not locked out of emerging approaches such as server-based computing or browser-based
cloud applications.
One of the core strengths of NxTop’s client hypervisor technology is that it can act as a Swiss army knife of
sorts. In addition to running full local copies of Microsoft Windows, it could, for example, also run a
lightweight Linux virtual appliance hosting a web browser or remote display client for server-based
computing. Whether you need a local operating system, thin-client functionality, a lightweight cloud OS, or
any combination of the three, NxTop provides the required base platform.
Getting Started
One of the major benefits of a client-hosted virtual desktop approach is that it is very easy to get started and
allows quick assessment of whether it can provide benefits to your organization.
NxTop is available with a free introductory license, and you will likely find that you already have everything you need to get a
proof of concept running in hours. At the end of the trial period, you will have the option to convert to a full
NxTop Enterprise license or request a free NxTop Express license that will allow you to continue using NxTop
on up to 5 PCs free of charge.