
Planning for High Availability

Lesson 7

Skills Matrix
Technology Skill                      | Objective Domain                       | Objective #
Planning for Application Resilience   | Plan application servers and services  | 1.4
Using Offline Files                   | Provision data                         | 4.2
Planning for Data Availability        | Plan high availability                 | 5.2

Planning for Data Availability


As the computer components with the most moving parts and the tightest physical tolerances, hard disk drives are more prone to failure than virtually any other element of a data network.

Backing Up Data
The simplest and most common type of data availability mechanism is the disk backup, that is, a copy of a disk's data stored on another medium.
The traditional medium for network
backups is magnetic tape, although
other options are now becoming
more prevalent, including online
backups.

Shadow Copies
Backups are primarily designed to protect
against major losses, such as drive
failures, computer thefts, and natural
disasters.
The loss of individual data files is a fairly
common occurrence on most networks,
typically due to accidental deletion or user
mishandling.
For backup administrators, the need to
locate, mount, and search backup media
just to restore a single file can be a regular
annoyance.
Windows Server 2008 includes a feature that addresses this problem: Shadow Copies.

Shadow Copies
Shadow Copies is a mechanism that
automatically retains copies of files on a server
volume in multiple versions from specific points
in time.
When users accidentally overwrite or delete
files, they can access the shadow copies to
restore earlier versions.
This feature is specifically designed to prevent
administrators from having to load backup
media to restore individual files for users.
Shadow Copies is a file-based fault tolerance
mechanism that does not provide protection
against disk failures, but it does protect against
the minor disasters that inconvenience users
and administrators on a regular basis.
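Conceptually, Shadow Copies behaves like a point-in-time version store: each scheduled copy preserves the state of a file, and a user picks an earlier version to restore. The Python sketch below illustrates only that idea; the names (VersionStore, snapshot, restore) are hypothetical and do not correspond to the Volume Shadow Copy Service API.

```python
from datetime import datetime

class VersionStore:
    """Toy model of point-in-time file versions (not the real VSS interface)."""

    def __init__(self):
        self.versions = {}          # path -> list of (timestamp, contents)

    def snapshot(self, path, contents):
        """Record the file's contents as of now (a scheduled shadow copy)."""
        self.versions.setdefault(path, []).append((datetime.now(), contents))

    def previous_versions(self, path):
        """Return the retained point-in-time copies, newest first."""
        return sorted(self.versions.get(path, []), reverse=True)

# Usage: snapshots accumulate on a schedule; the user recovers an earlier version
store = VersionStore()
store.snapshot(r"\\server\share\report.docx", "draft 1")
store.snapshot(r"\\server\share\report.docx", "draft 2, accidentally mangled")
history = store.previous_versions(r"\\server\share\report.docx")
oldest_ts, original_contents = history[-1]   # restore the earlier version
```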

The Shadow Copies Dialog Box

The Settings Dialog Box

The Schedule Dialog Box

Clients for Shadow Copy


Once the server begins creating shadow copies,
users can open previous versions of files on the
selected volumes, either to restore those that
they have accidentally deleted or overwritten,
or to compare multiple versions of files as they
work.
To access the shadow copies stored on a server,
a computer must be running the Previous
Versions Client. This client is included with
Windows Vista, Windows XP Service Pack 2,
Windows Server 2008, and Windows Server
2003.
For pre-SP2 Windows XP and Windows 2000 computers, you can download the client from the Microsoft Download Center.

The Previous Versions Tab of a File's Properties Sheet

Offline Files
Offline Files works by copying server-based folders that users select for offline use to a workstation's local drive.
The users then work with the copies,
which remain accessible whether the
workstation can access the server or
not.
No matter what the cause, be it a drive malfunction, a server failure, or a network outage, the users can continue working with their local copies of the files.

Offline Files
When the workstation is able to reconnect to the server drive, a synchronization procedure replicates the files between server and workstation in whichever direction is necessary.
If the user on the workstation has modified
the file, the system overwrites the server
copy with the workstation copy.
If another user has modified the copy of
the file on the server, the workstation
updates its local copy.
If there is a version conflict, such as when users have modified both copies of a file, the system prompts the user to decide which version to keep.
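The synchronization rules above amount to a simple three-way comparison against the last successful sync. The sketch below is a rough illustration of that decision logic under stated assumptions (modification timestamps, conflicts handed back to the user); it is not how the Windows Offline Files engine is actually implemented.

```python
from enum import Enum

class SyncAction(Enum):
    UPLOAD = "copy workstation version to server"
    DOWNLOAD = "copy server version to workstation"
    NONE = "copies already match"
    CONFLICT = "both changed; ask the user which version to keep"

def decide_sync(local_mtime, server_mtime, last_sync_time):
    """Decide which way to replicate a cached file after reconnecting.

    A copy counts as modified if it changed after the last successful sync.
    """
    local_changed = local_mtime > last_sync_time
    server_changed = server_mtime > last_sync_time

    if local_changed and server_changed:
        return SyncAction.CONFLICT      # version conflict: user must choose
    if local_changed:
        return SyncAction.UPLOAD        # workstation copy overwrites server copy
    if server_changed:
        return SyncAction.DOWNLOAD      # server copy refreshes the local cache
    return SyncAction.NONE

# Example: the user edited the cached copy offline; nobody touched the server copy
print(decide_sync(local_mtime=200, server_mtime=90, last_sync_time=100))
# SyncAction.UPLOAD
```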

Offline Files
To use Offline Files, the user of the
client computer must first activate
the feature, using one of the
following procedures:
Windows XP and Windows Server 2003: Open the Folder Options control panel, click the Offline Files tab, and select the Enable Offline Files checkbox.
Windows Vista and Windows Server 2008: Open the Offline Files control panel and click Enable Offline Files.

Disk Redundancy
Disk redundancy is the most common type
of high availability technology currently in
use.
Even organizations with small servers and modest budgets can benefit from redundant disks, installing two or more physical disk drives in a server and using the disk mirroring (RAID-1) and RAID-5 capabilities built into Windows Server 2008.
For larger servers, external disk arrays and dedicated RAID hardware products can provide more scalability and better performance.
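The fault tolerance in RAID-5 comes from block-level parity: the parity block for each stripe is the XOR of that stripe's data blocks, so the contents of any single failed disk can be recomputed from the survivors. A minimal illustration of the arithmetic (not of the Windows disk management feature itself):

```python
from functools import reduce

def parity(blocks):
    """XOR the data blocks of a stripe to produce its parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Reconstruct the missing block from the surviving blocks plus parity."""
    return parity(surviving_blocks + [parity_block])

# One stripe: three data blocks plus their parity block
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d1, d2, d3])

# The disk holding d2 fails: its block can be recomputed exactly
assert rebuild([d1, d3], p) == d2
```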

Data High Availability Solution


To choose a data high availability solution, consider the following questions:
How much data do you have to protect?
How critical is the data to the operation of your enterprise?
How long an outage can your organization comfortably endure?
How much can you afford to spend?

Data High Availability Solution


Remember that none of these high availability
mechanisms are intended to be replacements for
regular system backups.
For document files that are less than critical, or
files that see only occasional use, it might be
more economical to keep some spare hard drives
on hand and rely on your backups.
If a failure occurs, you can replace the
malfunctioning drive and restore it from your
most recent backup, usually in a matter of hours.
However, if access to server data is critical to
your organization, the expense of a RAID solution
might be seen as minimal, when compared to the
lost revenue or even more serious consequences
of a disk failure.

Planning for Application Availability


High availability is not limited to
data; applications, too, must be
available for users to complete their
work.

Application Resilience
Refers to the ability of an application
to maintain its own availability by
detecting outdated, corrupted, or
missing files and automatically
correcting the problem.

Enhancing Application Availability


Using Group Policy
Administrators can use Group Policy to
deploy application packages to computers
and/or users on the network.
When you assign a software package to a
computer, the client installs the package
automatically when the system boots.
When you assign a package to a user, the client installs the application when the user logs on to the domain, or when the user invokes the software by double-clicking an associated document file.

Enhancing Application Availability


Using Group Policy

Both of these methods enforce a degree of application resilience, because even if the user manages to uninstall the application, the system will reinstall it during the next startup or domain logon.
This is not a foolproof system,
however. Group Policy will not
recognize the absence of a single
application file, as some other
mechanisms do.

Windows Installer 4.0


The component in Windows Server
2008 that enables the system to
install software packaged as files
with a .msi extension.
One of the advantages of deploying
software in this manner is the built-in
resiliency that Windows Installer
provides to the applications.

Windows Installer 4.0


When you deploy a .msi package, either
manually or using an automated solution,
such as Group Policy or System Center
Configuration Manager 2007, Windows
Installer creates special shortcuts and file
associations that function as entry points
for the applications contained in the
package.
When a user invokes an application using one of these entry points, Windows Installer intercepts the call and verifies the application to make sure that its files are intact. If any files are missing or damaged, Windows Installer repairs the installation automatically before launching the application.
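The resiliency check can be summarized as "verify the application's files before launch, and trigger a repair if anything is missing." The sketch below models only that flow; the function and file names are hypothetical placeholders, and the real Windows Installer performs this through its own component and keypath database.

```python
import os

def verify_and_launch(entry_point, required_files, repair, launch):
    """Model of an advertised entry point: verify, repair if needed, then launch.

    repair() and launch() stand in for 'restore missing files from the .msi
    package' and 'start the application'; they are placeholders, not real APIs.
    """
    missing = [path for path in required_files if not os.path.exists(path)]
    if missing:
        # Resiliency: fix the broken pieces before the user ever sees an error
        repair(missing)
    launch(entry_point)

# Hypothetical usage
verify_and_launch(
    entry_point=r"C:\Program Files\ExampleApp\app.exe",
    required_files=[r"C:\Program Files\ExampleApp\app.exe",
                    r"C:\Program Files\ExampleApp\app.dll"],
    repair=lambda files: print("Repairing:", files),
    launch=lambda exe: print("Launching:", exe),
)
```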

Server Clustering
Server clustering can provide two
forms of high availability on an
enterprise network.
In addition to providing fault
tolerance in the event of a server
failure, it can provide network load
balancing for busy applications.

Failover Cluster
Servers themselves can suffer failures that
render them unavailable.
Hard disks are not the only computer
components that can fail, and one way of
keeping servers available is to equip them
with redundant components other than
hard drives.
The ultimate in fault tolerance, however, is
to have entire servers that are redundant
so that if anything goes wrong with one
computer, another one can take its place
almost immediately.
In Windows Server 2008, this is known as a failover cluster.

Failover Cluster Requirements


Duplicate servers.
Shared storage.
Redundant network connections.

Failover Cluster Requirements

Duplicate operating system.


Same applications.
Same updates.
Same Active Directory domain.

Failover Cluster Configuration


The Failover Cluster Management console
is included as a feature with Windows
Server 2008.
You must install it using the Add Features
Wizard in Server Manager.
Afterward, you can start to create a cluster
by validating your hardware configuration.
The Validate a Configuration Wizard
performs an extensive battery of tests on
the computers you select, enumerating
their hardware and software resources and
checking their configuration settings.
If any elements required for a cluster are
incorrect or missing, the wizard lists them
in a report.

The Failover Cluster Management Console

The Validate a Configuration Wizard

The Select Servers or a Cluster Page

The Testing Options Page

The Confirmation Page

The Summary Page

The Failover Cluster Validation Report

Creating a Failover Cluster


After you validate your cluster
configuration and correct any
problems, you can create the cluster.
A failover cluster is a logical entity
that exists on the network, with its
own name and IP address, just like a
physical computer.

The Create Cluster Wizard

The Select Servers Page

The Access Point for Administering the Cluster Page

The Confirmation Page

Newly Created Cluster in the Failover Cluster Management Console

Network Load Balancing (NLB)


Another type of clustering, useful
when a Web server or other
application becomes overwhelmed
by a large volume of users, is
network load balancing (NLB), in
which you deploy multiple identical
servers, also known as a server
farm, and distribute the user traffic
evenly among them.

Creating an NLB Cluster


To create and manage NLB clusters
on a Windows Server 2008 computer,
you must first install the Network
Load Balancing feature using Server
Manager.
This feature also includes the
Network Load Balancing Manager
console.
Once you create the NLB cluster itself, you can add servers to it and remove them as needed.

Creating an NLB Cluster


The process of implementing an NLB
cluster consists of the following
tasks:
Creating the cluster.
Adding servers to the cluster.
Specifying a name and IP address for
the cluster.
Creating port rules that specify which
types of traffic the cluster should
balance among the cluster servers.

The Network Load Balancing Manager Console

The New Cluster: Connect Page

The New Cluster: Host Parameters Page

The New Cluster: Cluster IP Addresses Page

The Add IP Address Dialog Box

The New Cluster: Cluster Parameters Page

The New Cluster: Port Rules Page

The Add/Edit Port Rule Dialog Box

The Network Load Balancing Manager Console with Active Cluster

Heartbeats
The servers in an NLB cluster
continually exchange status
messages with each other, known as
heartbeats.
The heartbeats enable the cluster to
check the availability of each server.

Convergence
When a server fails to generate five consecutive heartbeats, the cluster initiates a process called convergence, which stops the cluster from sending clients to the missing server.
When the offending server is operational
again, the cluster detects the resumed
heartbeats and again performs a
convergence, this time to add the server
back into the cluster.
These convergence processes are entirely automatic, so administrators can take a server offline at any time, for maintenance or repairs, without disrupting the cluster.
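A rough model of the rule described above: a host is dropped from the distribution list after five consecutive missed heartbeats and added back when its heartbeats resume. The threshold of five comes from the text; the class and method names in this sketch are illustrative only and do not reflect the NLB implementation.

```python
MISSED_HEARTBEAT_LIMIT = 5   # consecutive misses that trigger convergence

class NlbClusterModel:
    """Toy model of NLB convergence driven by heartbeat messages."""

    def __init__(self, hosts):
        self.missed = {host: 0 for host in hosts}
        self.active = set(hosts)

    def heartbeat_received(self, host):
        self.missed[host] = 0
        if host not in self.active:
            self.active.add(host)            # convergence: host rejoins the cluster
            print(f"Converging: {host} added back")

    def heartbeat_missed(self, host):
        self.missed[host] += 1
        if self.missed[host] >= MISSED_HEARTBEAT_LIMIT and host in self.active:
            self.active.remove(host)         # convergence: stop sending clients here
            print(f"Converging: {host} removed")

# Example: SERVER2 goes quiet, is dropped, then comes back online
cluster = NlbClusterModel(["SERVER1", "SERVER2", "SERVER3"])
for _ in range(5):
    cluster.heartbeat_missed("SERVER2")
cluster.heartbeat_received("SERVER2")
```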

Load Balancing Terminal Servers


Windows Server 2008 supports the
use of network load balancing for
terminal servers in a slightly different
manner.
For any organization with more than
a few Terminal Services clients,
multiple terminal servers are
required.
Network load balancing can ensure that the client sessions are distributed evenly among the terminal servers.

TS Session Broker
One problem inherent in the load
balancing of terminal servers is that a
client can disconnect from a session
(without terminating it) and be assigned
to a different terminal server when he or
she attempts to reconnect later.
To address this problem, the Terminal
Services role includes the TS Session
Broker role service, which maintains a
database of client sessions and enables a
disconnected client to reconnect to the
same terminal server.
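In effect, TS Session Broker maintains a lookup table of sessions: a returning user is routed back to the server that still holds his or her disconnected session, while a new user goes to whichever server the load balancer selects. The sketch below models only that routing decision and is not the role service's actual interface.

```python
class SessionBrokerModel:
    """Toy model of TS Session Broker routing decisions."""

    def __init__(self):
        self.sessions = {}    # username -> terminal server holding the session

    def route(self, username, pick_least_loaded):
        """Return the terminal server the client should connect to."""
        if username in self.sessions:
            # Reconnect: send the user back to the existing (disconnected) session
            return self.sessions[username]
        # New session: let the load balancer choose, then remember the mapping
        server = pick_least_loaded()
        self.sessions[username] = server
        return server

    def session_ended(self, username):
        self.sessions.pop(username, None)

# Example: the user disconnects and later reconnects to the same server
broker = SessionBrokerModel()
first = broker.route("jsmith", pick_least_loaded=lambda: "TS-02")
again = broker.route("jsmith", pick_least_loaded=lambda: "TS-01")
assert first == again == "TS-02"
```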

Load Balancing Terminal Servers


The process of deploying Terminal
Services with network load balancing
consists of two parts:
Creating a terminal server farm.
Creating a network load balancing
cluster.

The Terminal Services Configuration Console

The TS Session Broker Tab

The Completed TS Session Broker Tab

Group Policy Settings for TS Session Broker

Using DNS Round Robin


While TS Session Broker is an effective
method for keeping the sessions balanced
among the terminal servers, it does
nothing to control which terminal server
receives the initial connection requests
from clients on the network.
To balance the initial connection traffic
amongst the terminal servers, you can use
an NLB cluster, as described earlier in this
lesson, or you can use another, simpler
load balancing technique called DNS
Round Robin.

The DNS Manager Console

The New Host Dialog Box

The Advanced Tab of a DNS Server's Properties Sheet

Summary
In computer networking, high availability
refers to technologies that enable users to
continue accessing a resource despite the
occurrence of a disastrous hardware or
software failure.
Shadow Copies is a mechanism that
automatically retains copies of files on a
server volume in multiple versions from
specific points in time.
When users accidentally overwrite or
delete files, they can access the shadow
copies to restore earlier versions.

Summary
Offline Files works by copying server-based folders that users select for offline use to a workstation's local drive.
The users then work with the copies,
which remain accessible whether the
workstation can access the server or
not.

Summary
When you plan for high availability,
you must balance three factors: fault
tolerance, performance, and
expense.
The more fault tolerance you require
for your data, the more you must
spend to achieve it, and the more
likely you are to suffer degraded
performance as a result of it.

Summary
Disk mirroring is the simplest form of
disk redundancy and typically does
not have a negative effect on
performance as long as you use a
disk technology, such as SCSI (Small
Computer System Interface) or serial
ATA (SATA), that enables the
computer to write to both disks at
the same time.

Summary
Parity-based RAID is the most
commonly used high-availability
solution for data storage, primarily
because it is far more scalable than
disk mirroring and enables you to
realize more storage space from your
hard disks.
One way of protecting workstation
applications and ensuring their
continued availability is to run them
using Terminal Services.

Summary
Windows Installer 4.0 is the
component in Windows Server 2008
that enables the system to install
software packaged as files with a
.msi extension.
One of the advantages of deploying
software in this manner is the built-in
resiliency that Windows Installer
provides to the applications.

Summary
A failover cluster is a collection of
two or more servers that perform the
same role or run the same
application and appear on the
network as a single entity.

Summary
The NLB cluster itself, like a failover
cluster, is a logical entity with its own
name and IP address.
Clients connect to the cluster rather
than to the individual computers, and
the cluster distributes the incoming
requests evenly among its
component servers.

Summary
The Terminal Services role includes
the TS Session Broker role service,
which maintains a database of client
sessions and enables a disconnected
client to reconnect to the same
terminal server.

Summary
In the DNS Round Robin technique,
you create multiple resource records
using the same name, with a
different server IP address in each
record.
When clients attempt to resolve the
name, the DNS server supplies them
with each of the IP addresses in turn.
As a result, the clients are evenly
distributed among the servers.
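The rotation described above is easy to picture: the DNS server holds several A records for the same name and answers successive queries starting with a different address each time. A minimal sketch of that behavior, illustrative only rather than a model of the DNS Server service itself:

```python
from itertools import count

class RoundRobinName:
    """Toy model of a DNS name with multiple A records served round-robin."""

    def __init__(self, addresses):
        self.addresses = list(addresses)
        self._queries = count()

    def resolve(self):
        """Answer a query, rotating which address is listed first."""
        start = next(self._queries) % len(self.addresses)
        return self.addresses[start:] + self.addresses[:start]

# Three terminal servers registered under one name
farm = RoundRobinName(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print(farm.resolve()[0])   # 10.0.0.11
print(farm.resolve()[0])   # 10.0.0.12
print(farm.resolve()[0])   # 10.0.0.13
```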
