
Clustering and Storage with Windows Server 2003

An Internet.com Storage eBook

This content was adapted from Internet.com's ServerWatch Web site and was written by Marcin Policht.

2008, Jupitermedia Corp.

Contents

Server Clustering
Disk Storage (SCSI)
SAN-Based Storage
iSCSI Storage
Conclusion


Clustering and Storage with Windows Server 2003
By Marcin Policht

In this eBook we're going to take a look at clustering and storage with Windows Server 2003. You can use the information we present here as a foundation for creating high-availability solutions.
Although some of the technologies we will be describing have been available in earlier versions of Windows (as inherent components, add-on programs, or third-party offerings that made their way into Microsoft's portfolio through acquisitions), their latest incarnations are superior in terms of functionality, stability, and manageability.

Server Clustering
Two basic approaches to reaching high availability have been
built into the Windows Server
2003 operating system. The
first, known as Server Clustering, requires Windows
Server 2003 Enterprise and Datacenter Editions. The
second one, known as Network Load Balancing (NLB),
was incorporated into all Windows Server 2003 versions (including Standard and Web).
Each represents a unique approach to eliminating "a

single point of failure" in computer system design. They


also share one important underlying feature that serves
as the basis for their redundancy: Both increase availability by relying on multiple physical servers, hosting
identically configured instances of a particular resource
(such as a service or application). The main difference
lies in the way these instances are defined and implemented.
In the case of NLB, each instance is permanently tied to the physical server hosting it, and it remains active as long as this server is functional. In other words, all of them operate simultaneously during cluster
uptime. With Server Clustering,
on the other hand, there is only
a single active instance for
each highly available resource,
regardless of the total number
of servers that are members of
the entire cluster. The server that currently hosts this
resource becomes its owner and is responsible for processing all requests for its services.
These underlying architectural principles introduce a number of challenges. Since an NLB cluster consists of up to 32 instances running in parallel on separate servers, there is a

need for additional mechanisms that enable them to decide which is responsible for handling the processing of client requests targeting any of the highly available resources at any given time. This determination must be made for every new incoming request and, depending on the configuration, might have to be performed independently for each of them.

With Server Clustering, the equivalent process is trivial, since there is only one instance of each highly available resource. The cost, however, is the increased complexity of the logic that dictates which member server hosts this resource, especially following a failure of its previous owner.

To function as a unit, servers participating in a cluster (also referred to as nodes) must be able to interact with each other. This is accomplished by setting up redundant network connections so as to minimize the possibility of failure. Thus, each node should have at least two network adapters. The connections are organized into two groups, private and public, also referred to as "Internal Cluster communications only" and "All Communications," respectively. They are identified and configured during cluster installation on each member server.

The first group contains links dedicated to internode, intracluster traffic. Although the primary purpose of the second group is to carry service requests and responses between clients and the cluster, it also serves as a backup to the first one. Depending on the number of nodes in a cluster (and your budget), you can employ different technologies to implement node interconnects. In the simplest case (limited to two nodes), this is possible with a crossover cable. When a larger number of servers participate in a cluster (up to the total of eight supported by Windows Server 2003 Enterprise and Datacenter Editions), a preferably dedicated hub or switch is needed.

To optimize internode communication, which is critical for a cluster to operate properly, we recommend eliminating any unnecessary network traffic on the private network interfaces. This is accomplished by:

Disabling NetBIOS over TCP/IP. The relevant options are listed in the NetBIOS section on the WINS tab of the Advanced TCP/IP Settings dialog box of the interface properties.

Removing File and Printer Sharing for Microsoft Networks. This is configurable on the General tab of the interface properties dialog box.

Setting the appropriate speed and duplex mode rather than relying on the Autodetect option. This is done from the Advanced tab of the network adapter Properties dialog box.

Ensuring that statically assigned IP addresses are used, instead of Dynamic Host Configuration Protocol or Automatic Private IP Addressing. There should be no default gateway, and the entries for the "Use the following DNS server addresses" option, present on the Internet Protocol Properties dialog box for the connection, should be cleared. (A command-line sketch of these settings appears below.)
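If you prefer to script these interface settings rather than click through the dialog boxes, the netsh commands below are a minimal sketch of the equivalent configuration. The connection name "Private" and the 10.10.10.x addressing are assumptions for illustration only; substitute the names and addresses used in your own cluster.

    rem Assign a static address to the private (heartbeat) interface; note that no default gateway is set
    netsh interface ip set address name="Private" source=static addr=10.10.10.1 mask=255.255.255.0

    rem Clear any DNS server entries on the same interface
    netsh interface ip set dns name="Private" source=static addr=none

    rem Review the resulting configuration
    netsh interface ip show config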
It is no longer necessary to disable the Media Sensing feature on Windows Server 2003; on Windows 2000-based cluster members this was accomplished through a registry modification.

Despite these extra measures, communication between nodes can still fail. This makes it necessary to provide an additional safety mechanism that would prevent a so-called "split-brain" scenario, where individual nodes, unable to determine the status of clustered resources, attempt to activate them at the same time. This would violate the principles of server clustering described above and result in potentially serious implications, such as data corruption in the case of disk-based resources.

Quorum Designations
To prevent this, every cluster contains one designated resource, called Quorum, implemented as a dedicated disk volume. Most frequently, this volume consists of a pair of mirrored disks, which increases the level of its fault tolerance. Its optimum size is 500 MB (due to NTFS characteristics), although its use typically constitutes only a fraction of this capacity. As with other
resources, only one server owns the Quorum at any
given time. The Quorum owner has the ultimate
responsibility for making decisions regarding ownership
of all other resources.
More specifically, nodes exchange "heartbeat" signals,
formatted as User Datagram Protocol (UDP) packets at
pre-configured intervals (every 1.2 seconds) to confirm
their network interfaces are operational. The absence
of two consecutive packets triggers a reaction that is
supposed to address potential cluster problems. In
Windows 2000 Server-based implementations, this consisted of activating all resources on the current owner
of the Quorum and, simultaneously, deactivating them
on all other nodes. This effectively ensured only a single instance of each resource remained online.
However, under certain circumstances, it could lead to
an undesirable outcome.
Although a rather rare occurrence, it is possible for the Quorum owner to lose connectivity on all of its interfaces while the remaining nodes are still able to communicate with the client network. As a result, user requests will not be able to reach cluster resources, which are still active but reside on the node that is no longer accessible. The remaining nodes, however, would be fully capable of handling these requests, if only they could take ownership of the Quorum and all other resources.
The introduction of additional logic in the way
Windows Server 2003-based clusters handle the
absence of heartbeat traffic resolved this issue. Rather
than following the legacy procedure when missing
heartbeat signals are detected, nodes first check
whether any of their network interfaces designated as
public are operational and, if so, whether client networks are still reachable. This is accomplished by sending ICMP (Internet Control Message Protocol) echo
requests (i.e., executing PING) to external systems, typically the default gateway configured for these interfaces. If the node hosting the Quorum fails any of these
tests, it will voluntarily deactivate all its resources,
including the Quorum. If the remaining nodes discover
their network links are still working, they will have no

10 Coolest Features in
Windows Server 2008
by Paul Rubens

There's still plenty of mileage left in Microsoft Windows Server 2003, but it doesn't hurt to
look ahead. You won't find any killer features
in Windows Server 2008, but that's not to say there's
nothing to get excited about. There's a great deal
that's new, and depending on the setup of your
organization, it's almost certain you'll find some or
all of it extremely valuable.
Any ranking is bound to be subjective, and bearing
that in mind, here are what we believe to be the 10
most interesting new features in Windows Server
2008.
1. Virtualization
Microsoft's Hyper-V hypervisor-based virtualization
technology promises to be a star attraction of Server
2008 for many organizations.
Although some 75 percent of large businesses have
started using virtualization, only an estimated 10
percent of servers are running virtual machines.
This means the market is still immature. For
Windows shops, virtualization using Server 2008
will be a relatively low-cost and low-risk way to dip a
toe in the water.
At the moment, Hyper-V lacks the virtualized infrastructure support that virtualization market leader
VMware can provide. Roy Illsley, senior research
analyst at U.K.-based Butler Group, noted that
Microsoft is not as far behind as many people seem
to think. "Don't forget Microsoft's System Center,
which is a fully integrated management suite and
which includes VM Manager. Obviously it only
works in a Wintel environment, but if you have
Server 2008 and System Center, you have a pretty
compelling proposition.
"What Microsoft is doing by embedding virtualization technology in Server 2008 is a bit like embedding Internet Explorer into Windows," said Illsley.
"This is an obvious attempt to get a foothold into the
virtualization market."


problem establishing a new Quorum owner and transferring control of all cluster resources to it.
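For reference, the reachability test described above amounts to the same check an administrator can run by hand from any node; the gateway address below is only an example.

    rem Manual equivalent of the public-network check (ICMP echo to the default gateway)
    ping -n 2 192.168.1.1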
Besides assisting with resource arbitration following
communication failure, Quorum serves another important function - providing storage for up-to-date cluster
configuration. This configuration resides in two files in
the MSCS folder on the quorum volume - the cluster
hive checkpoint file (Chkxxx.tmp) and Quorum log
(Quolog.log). The first one stores a copy of the configuration database, which mirrors the content of the Cluster registry hive on the server hosting the Quorum resource, stored in the %SystemRoot%\Cluster\CLUSDB file on that server. This database is replicated to all remaining nodes and loaded into their registry (maintaining a single "master" copy of this information ensures its consistency). Replication takes place for every cluster configuration change, as long as all nodes are operational. If this is not the case, timestamped changes are recorded in the Quorum log file and applied to the configuration database once the offline nodes are brought back online. Being familiar with these facts is important
when troubleshooting some of the most severe cluster
problems.
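When troubleshooting, it can help to look at these pieces directly. The sketch below assumes the quorum volume is mounted as Q: (an arbitrary example) and that cluster.exe is run with cluster administration rights.

    rem List the quorum files (the Chkxxx.tmp checkpoint and Quolog.log) on the quorum volume
    dir Q:\MSCS

    rem Display which resource currently serves as the quorum
    cluster /quorum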
As already mentioned, Quorum is implemented as a
volume on a physical disk. However, details of this
implementation vary depending on a number of factors, such as the number of nodes, server cluster type, or
storage technology.
Maintaining a single instance of each clustered resource
(ensuring at the same time its fault tolerance and preventing "split-brain" scenarios) is accomplished through
two basic mechanisms, resource virtualization and internode communication.
Resource virtualization requires each clustered service
or application be represented by a number of related
software and hardware components, such as disks, IP
addresses, network names, and file shares, which can
be assigned to any server participating in the cluster
and easily transferred between them, if necessary. This
is made possible by setting up these servers in a very
specific manner, where they can access the same set of
shared storage devices, reside on the same subnet, and
are part of the same domain. For example, to create a
highly available network file share, you would identify a
shared disk drive hosting the share, an IP address (with
corresponding network name) from which the share can

10 Coolest continued
At launch, Microsoft is unlikely to have a similar
product to VMware's highly popular VMotion (which
enables administrators to move virtual machines
from one physical server to another while they are
running), but such a product is bound to available
soon after.
2. Server Core
Many server administrators, especially those used
to working in a Linux environment, instinctively dislike having to install a large, feature-packed operating system to run a particular specialized server.
Server 2008 offers a Server Core installation, which
provides the minimum installation required to carry
out a specific server role, such as for a DHCP, DNS,
or print server. From a security standpoint, this is
attractive. Fewer applications and services on the
server make for a smaller attack surface. In theory,
there should also be less maintenance and management with fewer patches to install, and the whole
server could take up as little as 3GB of disk space, according to Microsoft. This comes at a price - there's no upgrade path back to a "normal" version
of Server 2008 short of a reinstall. In fact there is no
GUI at all - everything is done from the command
line.
3. IIS
IIS 7, the Web server bundled with Server 2008, is a
big upgrade from the previous version. "There are
significant changes in terms of security and the
overall implementation, which make this version
very attractive," said Barb Goldworm, president and
chief analyst at Boulder, Colo.-based Focus
Consulting. One new feature getting a lot of attention is the ability to delegate administration of
servers (and sites) to site admins while restricting
their privileges.
4. Role-Based Installation
Role-based installation is a less extreme version of
Server Core. Although it was included in 2003, it is
far more comprehensive in this version. The concept is that rather than configuring a full server
install for a particular role by uninstalling unnecessary components (and installing needed extras), you
simply specify the role the server is to play, and
Windows will install what's necessary - nothing
more. This makes it easy for anyone to provision a


be accessed remotely, and the target file share, with its name and access permissions.
Although this sounds complicated and time consuming,
all necessary resources are pre-defined, making this
procedure fairly straightforward. Once resources are
identified and configured (by specifying disk drive letters, assigning unique IP addresses, network names, or
file share characteristics), they can be assigned to any
server participating in the cluster (as long as each one is
capable of supporting them). Resources can then be
easily moved between nodes in case the one currently
hosting them fails.
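To make the file-share example more concrete, the same building blocks can be defined with the cluster.exe command-line tool instead of Cluster Administrator. This is only a sketch: the group and resource names are invented, the private properties (drive letter, IP address, share path) still need to be set afterward, and dependencies between the resources must be added before the group is brought online.

    rem Create a group to hold the highly available file share (names are examples)
    cluster group "FileShareGroup" /create

    rem Add the related resources: shared disk, IP address, network name, and file share
    cluster resource "Share Disk" /create /group:"FileShareGroup" /type:"Physical Disk"
    cluster resource "Share IP" /create /group:"FileShareGroup" /type:"IP Address"
    cluster resource "Share Name" /create /group:"FileShareGroup" /type:"Network Name"
    cluster resource "Public Share" /create /group:"FileShareGroup" /type:"File Share"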

The Importance of Quorum


Inter-node communication is facilitated through heartbeat signals carried over redundant network connections between cluster members and through Quorum's
presence, which determines how resource ownership
should be handled. As we pointed out, Quorum has
the additional important function of storing the most
up-to-date cluster configuration, copied subsequently
to a dedicated registry hive on each node. Local copies
are referenced when nodes join the cluster during startup. Because of its significance in clustering architecture,
Quorum also serves as the basis for three main server
clustering models:
Single Shared Quorum: Quorum is implemented as
the Physical Disk clustered resource.
Single Local Quorum: Quorum is implemented as
the Local Quorum clustered resource.
Majority Node Set Quorum: Quorum is implemented as the Majority Node Set clustered resource.
Single Shared Quorum clusters are by far the most popular among server cluster implementations. They most closely match the traditional clustering design (which is reflected by continuing support for this model since the introduction of Microsoft Cluster Server in Windows NT 4.0 Server Enterprise Edition), offering high availability for resources representing a wide variety of services and applications as well as simplicity of installation and configuration.
As their name indicates, Single Shared Quorum clusters use a storage design that enables them to access the

10 Coolest continued
particular server without increasing the attack surface by including unwanted components that will
not do anything except present a security risk.
5. Read Only Domain Controllers (RODC)
It's hardly news that branch offices often lack
skilled IT staff to administer their servers, but they
also face another, less talked about problem. While
corporate data centers are often physically secured,
servers at branch offices rarely have the same physical security protecting them. This makes them a
convenient launch pad for attacks back to the main
corporate servers. RODC provides a way to make an
Active Directory database read-only. Thus, any mischief carried out at the branch office cannot propagate its way back to poison the Active Directory system as a whole. It also reduces traffic on WAN links.
6. Enhanced Terminal Services
Terminal services has been beefed up in Server
2008 in a number of ways. TS RemoteApp enables
remote users to access a centralized application
(rather than an entire desktop) that appears to be
running on the local computer's hard drive. These
apps can be accessed via a Web portal or directly by
double-clicking on a correctly configured icon on
the local machine. TS Gateway secures sessions,
which are then tunnelled over https, so users don't
need to use a VPN to use RemoteApps securely over
the Internet. Local printing has also been made significantly easier.
7. Network Access Protection
Microsoft's system for ensuring that clients connecting to Server 2008 are patched, running a firewall, and in compliance with corporate security policies - and that those that are not can be remediated - is
useful. However, similar functionality has been and
remains available from third parties.
8. Bitlocker
System drive encryption can be a sensible security
measure for servers located in remote branch
offices or anywhere where the physical security of
the server is sub-optimal. Bitlocker encryption protects data if the server is physically removed or
booted from removable media into a different operating system that might otherwise give an intruder
access to data that is protected in a Windows


same set of disks from every cluster member. While underlying hardware varies widely (and might involve
such types of technologies as SCSI, SANs, NAS, or
iSCSI, which we will review more closely later), the basic
premise remains the same.
Only one instance of any specific resource is permitted
at any given time within the cluster. The same applies
to Quorum, located on a highly available disk volume,
physically connected via a SCSI bus, Fibre Channel
links, or network infrastructure to all servers participating in the cluster. Ownership of the shared volume is
arbitrated to ensure it is granted only to a single node,
thus preventing other nodes from accessing it at the
same time (such a situation would likely result in data corruption).
This arbitration is typically handled using internal SCSI
commands (such as SCSI reserve and SCSI release) as
well as bus, Target, or Logical Unit Number (LUN)
resets. The specifics depend on the type of storage


technology implemented. Note that support for a clustering installation is contingent on strict compliance
with the Hardware Compatibility List (which is part of
the Windows Server Catalog, containing all clustering
solutions certified by Microsoft). Therefore, it is critical to verify that the system you intend to purchase and deploy is listed there. Quorum, in this case, is implemented as
the Physical Disk resource, which requires having a separate volume accessible to all cluster nodes (clustering
setup determines automatically whether the volume
you selected satisfies necessary criteria).
Unfortunately, the majority of hardware required to set
up clustered servers is relatively expensive (although
prices of such systems are considerably lower than they
were a few years ago), especially if the intention is to
ensure redundancy for every infrastructure component,
including Fibre Channel and network devices, such as

10 Coolest continued
environment. Again, similar functionality is available from third-party vendors.
9. Windows PowerShell
Microsoft's new(ish) command line shell and scripting language has proved popular with some server
administrators, especially those used to working in
Linux environments. Included in Server 2008,
PowerShell can make some jobs quicker and easier
to perform than going through the GUI. Although it
might seem like a step backward in terms of user
friendly operation, it's one of those features that
once you've gotten used to it, you'll never want to
give up.
10. Better Security
We've already mentioned various security features
built into Server 2008, such as the ability to reduce
attack surfaces by running minimal installations,
and specific features like BitLocker and NAP.
Numerous little touches make Server 2008 more
secure than its predecessors. An example is Address
Space Layout Randomization - a feature also present
in Vista - which makes it more difficult for attackers
to carry out buffer overflow attacks on a system by
changing the location of various system services
each time a system is run. Since many attacks rely
on the ability to call particular services by jumping
to particular locations, address space randomization
can make these attacks much less likely to succeed.
It's clear that with Server 2008 Microsoft is treading
the familiar path of adding features to the operating
system that third parties have previously been providing as separate products. As far as the core server product is concerned, much is new. Just because
some technologies have been available elsewhere
doesn't mean they've actually been implemented.
Having them as part of the operating system can be
very convenient indeed.
If you're running Server 2003, now is the time
to start making plans to test Server 2008 - you're
almost bound to find something you like. Whether
you decide to implement it, and when, is up to you.


adapters and switches, or disk arrays and their controllers. The cost might be prohibitive, especially for programmers whose sole goal is developing cluster-aware software or exploring the possibility of migrating existing applications into a clustered environment.
To remediate this issue, Microsoft made such functionality available without a specialized hardware setup, by allowing the installation of a cluster on a single server with local storage only (also known as a single node cluster). Obviously, such a configuration lacks any degree of high availability, but it has all the features necessary for application development and testing. Since local disks are not represented as Physical Disk resources, this clustering model requires using a distinct resource type called Local Quorum when running the New Server Cluster Wizard during initial setup, which we will review in detail later.
Despite the benefits mentioned earlier (such as a significant level of high availability and compatibility with a
variety of hardware platforms, applications, and services), Single Shared Quorum has limitations. The first one
is inherent to the technologies used to implement it.
For example, configurations relying on SCSI-based
shared storage are restricted by the maximum length of
the SCSI bus connecting all cluster nodes to the same
disk array (which typically forces you to place them in
the same or adjacent data center cabinets). This distance can be increased considerably by switching to a
Fibre Channel infrastructure, but not without significant
impact on hardware cost. Introducing iSCSI and NAS
into the arsenal of available shared storage choices
provides the same capability at lower prices, but there
are still some caveats that restrict their widespread use
(e.g., NAS devices are not supported as the Quorum
resource). The second limitation is that despite redundancy on the disk level (which can be accomplished
through RAID sets or duplexing, with fault-tolerant
disks and controllers), Single Shared Quorum still constitutes a single point of failure.
There are third-party solutions designed to address both of these limitations, and with the release of Windows 2003 Server-based clustering, Microsoft introduced its own remedy in the form of the Majority Node Set (MNS) Quorum. Like Local Quorum, MNS is defined as a separate resource that must be selected during cluster setup with the New Server Cluster Wizard. Also, as with the Local Quorum model, dependency on the shared storage

hosting the Quorum resource is eliminated, without having a negative impact on high availability.
The level of redundancy is increased by introducing additional copies of the Quorum stored locally on each node (in the %SystemRoot%\Cluster\MNS.%ResourceGUID%$\%ResourceGUID%$\MSCS folder, where %ResourceGUID% designates a 128-bit unique identifier assigned to the cluster at its creation). As you can expect, having more than one Quorum instance requires a different approach to preventing the "split-brain" scenario. This is handled by defining a different rule that determines when the cluster is considered operational (which, in turn, is necessary to make its resources available for client access). For this to happen, more than half of the cluster nodes must be functioning properly and able to communicate with each
other. The formula used to calculate this number is:
[(total number of nodes in MNS cluster)/2] + 1
where the square brackets denote the floor function, returning the largest integer equal to or smaller than the result of dividing the total number of nodes by two. For
example, for a five-node cluster, three nodes would
need to be running and communicating for its
resources to be available (the same would apply to a
four-node cluster). Clearly, setting up a two-node MNS
cluster, although technically possible, does not make
much sense from an availability perspective (since one
node's failure would force the other one to shut down
all of its resources). For an MNS cluster to function, at
least two servers (in a three-node cluster) must be operational (note that with a Single Shared Quorum, a cluster might be capable of supporting its resources even
with one remaining node).
Effectively, the rule guarantees that at any given point
there will be no more than a single instance of every
cluster resource. The Clustering service on each node is configured to launch at boot time and to try to establish communication with a majority of the other nodes. This
process is repeated every minute if the initial attempt
fails.
This solution introduces additional requirements, since
its architecture implies the existence of multiple copies of
the clustered data (unlike with the Single Shared
Quorum model), which must be consistently maintained. Although the clustering software itself is respon-


sible for replication of the Quorum configuration across all nodes, this does not apply to service- and application-specific data. In general, there are two ways of handling this task. The first one relies on mechanisms built
into the application (e.g., log shipping in SQL Server
2000/2005 deployments). The second one involves setting up replication on file system or disk block level.
This can be handled through software or hardware, a
topic we plan to elaborate on later in this eBook.
In addition, since clustered resources are virtualized,
some of the restrictions placed on the Single Shared
Quorum model still apply. In particular, for resource
failover to take place, nodes must be able to detect
failure of others through the absence of heartbeat signals. This requires that round-trip latency between nodes be
no longer than 500 ms -- affecting, in turn, the maximum allowed distance between them. They also must
be members of the same domain and their public and
private network interfaces have to reside on the same
subnets (which can be accomplished through setting
up two VLANs spanning multiple physical locations
hosting cluster nodes).

Furthermore, since Quorum updates are handled via network file shares called %ResourceGUID%$ (associated with the Quorum location listed earlier), both the Server
and Workstation services (LanManServer and
LanManWorkstation, respectively) must be running on
all cluster nodes and File and Printer Sharing for
Microsoft Networks must be enabled for both private
and public network connections.
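A quick way to confirm these prerequisites on each node is shown below; both commands are standard Windows tools, and the GUID-based share name will be whatever your cluster generated.

    rem Confirm the Server and Workstation services are running
    sc query lanmanserver
    sc query lanmanworkstation

    rem List the shares exposed by this node (the %ResourceGUID%$ share should appear)
    net share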
Thus, when designing the architecture it is important to keep in mind the impact the design will have on the availability of the MNS cluster. For example,
setting up two sites separated by a network link with an
equal number of nodes in each will cause both to fail if
communication between them is severed (since neither
one contains a majority of the nodes). It might be beneficial in such a situation to set up a third site with a single cluster node in it (and dedicated network links to the other
two sites), dedicated exclusively to establishing majority
node count when needed. Alternatively, you can also
force some of the cluster nodes to host resources,
although this requires manual intervention.


Disk Storage (SCSI)

So far, we've provided a brief overview of server clustering and described its three basic types,
categorized according to characteristics of the
Quorum resource (i.e., Single Shared, Single Local, and
Majority Node Set). Next up, we will examine one of
the most important clustering components - disk
storage.
The importance of disk storage stems
from its role in the server clustering
architecture. As you might recall from
earlier discussions, the Quorum
resource must be implemented as an
NTFS volume, hosting Quorum log
and Checkpoint files (for more
details, refer to the earlier discussion).
Just as relevant is the ability to implement the Physical Disk resource (separate from the Quorum resource),
which is required in the overwhelming majority of typical clustered applications.
To comply with server clustering principles, storage must have certain
characteristics. More specifically, the
volumes it hosts must be accessible to all cluster
nodes - a critical requirement where the Single Shared cluster category is concerned. This applies to many deployments, but not to the Single Local or Majority Node Set types.

Cluster Service Communication


The storage must also be able to communicate with the Cluster Service, an instance of which runs on every node, via the SCSI protocol. This does not limit hardware choices to SCSI disks, channels, and controllers; however, the disks and controllers (and the channel they share) must be capable of properly processing and transmitting such SCSI commands as Reserve (used by individual cluster nodes to obtain and maintain exclusive ownership of a device), Release (which relinquishes the reservation of a device, allowing another cluster node to take ownership of it), and Reset (which forcibly removes an existing reservation of an individual device or of all devices on the bus).
These commands serve a very important purpose - they prevent a situation where two hosts would be permitted to write simultaneously to the
same disk device. This is likely to happen otherwise, considering both hosts share a physical connection to it. When the first cluster node is brought online, its Cluster Service (with the help of the Cluster Disk Driver
Clusdisk.sys) scans the devices of the shared storage
bus and attempts to bring them online. It issues the
Reserve command to claim ownership. The same com-



mand gets re-sent at regular intervals (every three seconds). This ensures ownership is maintained, and the owning node has exclusive access to all
volumes on the target disk.
Reserve also plays a critical role in cases when network
communication between nodes fails. As mentioned earlier, such a situation is handled by first establishing
which node is the owner of the Quorum (potentially
triggering new election, if the previous owner is no
longer operational) and transferring all cluster resources
to it. A successful outcome of this process relies on two
settings that control SCSI commands issued by cluster
nodes. The first one forces the Quorum owner to renew
its reservation every three seconds. The second one causes non-Quorum owners, upon detecting an inter-node communication failure, to initiate a bus-wide Reset, followed by a seven-second waiting period. Because that wait spans more than two renewal intervals, a healthy Quorum owner will have re-reserved the disk before the challenger attempts to claim it.
If the Quorum remains available after the wait period is
over (which indicates the previous Quorum owner
failed), the challenging node takes over ownership of
the Quorum (by sending a Reserve command) as well as all remaining resources. Another purpose of the Reset command is to periodically terminate reservations to detect situations in which a node has become unresponsive (without failing completely). Provided that this is not the
case, reservations are subsequently re-established.
Now that we have established the functional requirements of storage in Single Shared Quorum clusters, let's review technologies that satisfy the criteria outlined above. Regardless of your choice, the actual hardware selected must be Microsoft-certified, which can be verified by referencing the Microsoft Windows Server Catalog. In general, storage clustering solutions belong to one of four
categories:
Shared SCSI
Fibre Channel Storage Area Networks (SANs)
NAS (Network Attached Storage)
iSCSI

SCSI
SCSI (Small Computer System Interface) is the best
known and most popular storage technology for multidisk configurations. The term SCSI also refers to the
communication protocol, providing reliable block-level
data transport between a host (known as the initiator)
and storage (known as the target), which is independent of the way data is stored. Its architecture consists of
a parallel I/O bus shared between multiple (frequently
daisy-chained) devices (including controllers), and
enclosed on both ends with terminators, which prevent
electrical signals from bouncing back (terminators are
frequently built directly into SCSI devices).
A SCSI controller is typically installed in a host system
as the host adapter, but it can also reside in an external
storage subsystem. Each device on the bus is assigned
a unique identifier referred to as a SCSI ID, numbered from 0 to 7 or from 0 to 15 for narrow and wide
SCSI bus types, respectively. In addition to providing
addressing capabilities, the SCSI ID determines priority
level (with an ID 7 being the highest and assigned typically to the controller, ensuring proper bus arbitration).
The limited range of SCSI IDs (which restricts the number of devices on the bus to 15) is extended through the
assignment of Logical Unit Numbers (LUNs), associated
with each individual storage entity, which is able to
process individual SCSI commands. Typically, they represent individual disks within a storage subsystem, connected to the main SCSI bus via an external SCSI controller. In addition to LUN and SCSI ID, the full address
of such Logical Unit also contains a bus identifier, which
commonly corresponds to a specific SCSI interface
card. A server can have several such cards installed.
The total number of available LUNs ranges from 8 to
254, depending on the hardware support for Large
LUNs. For more information on this subject, refer to
Microsoft Knowledge Base article 310072.
Implementing SCSI technology for the purpose of
shared clustered storage adds an extra layer of complexity to its configuration. Since the bus must be
accessible by clustered nodes, install a SCSI controller
card in each (and disable their BIOS). Furthermore,
since these controllers will be connected to the same
bus, they cannot have identical SCSI IDs. Typically, this
dilemma is resolved by setting one to 7 and the other
to 6, which grants the latter the next-highest priority
level. To ensure the failure of a single component (such
as a device, controller, or host) does not affect the
entire cluster, use external (rather than device's built-in)
terminators. Keep in mind that the number of nodes in a
SCSI storage-based clustered implementation cannot
exceed two.


As part of your design, you should ensure a sufficient level of storage redundancy by implementing RAID, so that individual disk failures do not affect overall data accessibility. Although Windows 2000 and 2003 Server products support software-based fault-tolerant RAID configurations (RAID 1 and 5, also known as mirroring and striping with parity, respectively), this requires setting up the target disks as dynamic, which in turn is not permitted for shared clustered storage - at least not without installing third-party products (e.g., the Symantec Storage Foundation for Windows add-in). This restriction does not apply to local cluster node drives.
Although this means that you must resort to more
expensive, external storage arrays, which implement
hardware-based RAID, you can benefit not only from
significantly better performance but also from
improved functionality, including such features as
redundant hot swappable fans, power supplies, extra
disk cache memory, and more complex and resilient
RAID configurations (such as RAID 10 or 50, which
combine disk mirroring with striping or striping with
parity, protecting from losing data access even in cases
of multiple disk failures).


Unfortunately, SCSI technology, despite its relatively low cost, widespread popularity, and significant transfer speeds of up to 320 MBps with the SCSI-3 Ultra320 standard, is subject to several limitations. They result mainly
from its parallel nature, which introduces a skew phenomenon (where individual signals sent in parallel arrive
at a target at slightly different times), restricting the
maximum length of the SCSI bus (in most implementations, remaining within a 25-meter range, requiring
physical proximity of clustered components, which
makes them unsuitable for disaster recovery scenarios).
A recently introduced serial version of SCSI (Serial
Attached SCSI, or SAS) addresses the drawbacks of its
parallel counterpart, but it is unlikely to become a
meaningful competitor to Fibre Channel or iSCSI. The
SCSI bus is also vulnerable to contention issues, where
a device with higher priority dominates communication.
Finally, storage is closely tied to the hosts, which
increases the complexity of consolidation and expansion efforts.
Although shared SCSI technology is a viable option for
lower-end server clustering implementations on the
Windows 2003 Server platform, other types of storage
solutions offer considerable advantages in terms of performance, scalability, and stability.


SAN-Based Storage

Fibre Channel storage area networks (FC SANs) represent a considerable shift from the directly
attached storage paradigm. They offer significant
functionality and performance improvements. The basic
idea is to use a network infrastructure for connecting
servers to their disks, allowing physical separation of
the two by far greater distances than was previously
possible. But there are also other,
equally important, advantages of this
separation. Managing storage in
larger environments no longer
requires dealing with each individual
system, as was the case with directly
attached models. Disks are grouped
together, simplifying their administration (e.g., monitoring, backups,
restores, provisioning and expansion)
and making it more efficient, through
such inventions as LAN-free or server-free backups and restores, or
booting from a SAN.
In addition, since a large number of
servers and storage devices can participate in the same SAN, it is possible to attach new ones as needed,
making allocation of additional space a fairly easy task.
This is further simplified by the DISKPART.EXE
Windows 2003 Server utility, which is capable of
dynamically extending basic and dynamic volumes, as
explained in Microsoft Knowledge Base Article

Q325590. This is especially true when comparing the


SAN with a SCSI-based setup, where the limited
number of internal or external connectors and adjacent
physical space available must be taken into account.
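As a rough illustration of the volume-extension workflow mentioned above, the DISKPART session below grows an existing volume into newly presented space. The volume number is an example; identify the correct one from the list volume output first.

    diskpart
    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> extend
    DISKPART> exit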
Fibre Channel SAN technology leverages SCSI-3 specifications for communication between hosts and target
devices, since its implementation is
based on the SCSI command set.
Their transmission, however, is handled using FC transport protocol.
This is done in a serial manner, typically over fiber optic cabling
(although copper-based media are
allowed), which eliminates distance
limitations inherent to parallel SCSI.
Note, however, that the term "network" should not be interpreted in
the traditional sense, since SANs do
not offer routing capabilities, primarily because they are intended for
high-speed, low-latency communication. SANs also use a distinct end node identification mechanism,
which does not rely on Media Access
Control (MAC) addresses associated with each network
adapter, but instead employs 64-bit (expressed usually
in the form of eight pairs of hexadecimal characters)
World Wide Names (WWN), burned into fibre host bus
adapters (HBAs) by their manufacturers. FC intercon-



necting devices handle dynamic address allocation on the fabric level. In addition, unlike the majority of IP-based networks, FC SANs have a primarily asymmetric character, with active servers on one end connecting mostly to passive devices, such as disk arrays or tape drives
on the other, arranged in one of the following topologies:
Point-to-point: Links a single host and its storage
device directly via a fiber connection. This type of
configuration is the simplest and least expensive to
deploy and manage, but it lacks the flexibility and
expandability of the other two, since it is conceptually equivalent to SCSI-based directly attached disks. It
is, therefore, rarely implemented.
Shared, also known as Fibre Channel Arbitrated
Loop (FC-AL): Takes the shape of a logical ring (but
physically forming a star), with an FC hub or a loop
switch serving as the interconnecting device. The
design is similar to Token Ring architecture. This similarity is also apparent when it comes to arbitration of
loop usage.
Since FC-AL devices share the same media, whenever one of them needs to communicate with another, it must send an arbitration packet around the loop, which, once returned to the sender, signals that
exclusive loop access can be granted. Should conflicts occur when multiple devices attempt to communicate at the same time, the one with the lowest
address wins. Addresses, which differentiate among
all nodes participating in the loop, can be hardcoded or assigned dynamically. The majority of loop
switches provide this capability. Although dynamic
allocation simplifies configuration in multi-node scenarios, it might also cause instability when devices
are restarted or new ones added, since such events
trigger loop reinitialization and node readdressing.
Although considerably less expensive than their
switch-based counterparts, FC-ALs are not as efficient. Access to the fabric is shared across all interconnected devices, which allows only two of them to communicate at any given time. They are also not as scalable and support fewer nodes - the maximum is 126.
As with SCSI-based shared storage, FC-AL-based
Windows 2003 Server clusters are limited to two
nodes. In addition, Microsoft recommends using arbi-


trated loops for individual cluster implementations, rather than sharing them with other clusters or non-clustered devices. Larger or shared implementations require a switched configuration.
Switched, referred to as Switched Fibre Channel
Fabric (FC-SW): These networks use FC switches
functioning as interconnecting devices. This topology
addresses the efficiency limitations of the loop configuration by allowing simultaneous, dedicated paths
at the full wire speed between any two Fibre-attached nodes. This is based on the same principle
as traditional LAN switching. Scalability is greatly
increased due to the hierarchical, fully redundant architecture. It consists of up to three layers, with the core layer employing the highest-speed, highest-port-density switches, the distribution layer relying on midrange hardware, and the access layer characterized by low-end switches, arbitrated loops, and point-to-point connections.
Switches keep track of all fabric-attached devices,
including other switches, in federated and cascaded
configurations, using 3-byte identifiers. This sets the
theoretical limit of roughly 16 million unique addresses. Stability is improved as well, since restarts and
new connections are handled gracefully, without
changes to an already established addressing
scheme or having a negative impact on the status of
the fabric. This is partially because of the introduction
of less disruptive, targeted LUN and SCSI ID resets,
which are attempted first before resorting to the bus-wide SCSI Reset command (previously the only available option in Windows 2000 Server cluster implementations). Keep in mind, however, that the availability of this feature depends on the vendor-developed, HBA-specific miniport driver, which must be written specifically to interact with the Microsoft-provided StorPort port driver. StorPort is new in Windows 2003 Server and is designed specifically to take advantage of the performance-enhancing capabilities of FC adapters, rather than the legacy SCSIPort driver model.
The increased performance, flexibility, and reliability of
switched implementations come with their own set of
drawbacks. Besides considerably higher cost (compared to arbitrated loops) and interoperability issues
across components from different vendors, one of the
most significant ones is the increased complexity of
configuration and management. In particular, it is fre-


quently necessary to provide an appropriate degree of isolation across multiple hosts connected to the same
fabric and shared devices with which they are supposed to interact.
As mentioned earlier, this exclusive access is required
to avoid data corruption, which is bound to happen
with unarbitrated, simultaneous writes to the same disk
volume. In general, three mechanisms deliver this functionality - zoning, LUN masking (known also as selective
presentation), and multipath configurations.
Zoning can be compared to Virtual LANs (VLANs) in
traditional networks, since it defines logical boundaries
(known in SAN terminology as zones) that encompass
arbitrarily designated switch ports. Zone definitions in
clustered deployments are typically stored and
enforced by the switch port ASIC (Application-Specific
Integrated Circuit) firmware, with communication permitted only between nodes attached to the switch
ports that belong to the same zone. They can also be
implemented by referencing WWN of host bus
adapters. In addition to preventing accidental data corruption, zoning offers also an additional level of security. It protects the server from unauthorized access. In
clustered configurations, cluster nodes, along with the
shared disks that constitute clustered resources, should
belong to the same zone.
LUN (an acronym for Logical Unit Number, describing a
logical disk defined in a FC SAN) masking makes it
possible to limit access to individual, arbitrarily selected
LUNs within a shared storage device. Such functionality
is typically required in configurations involving large
multidisk systems, where port-level zoning does not
offer sufficient granularity. LUN masking provides necessary isolation in cases of overlapping zones, where
hosts or storage devices belong to more than one
zone. The relevant configuration is performed and
stored on the storage controller level.
Multipath technology is the direct result of the drive for full redundancy in a SAN environment. Such redundancy
is available on the storage side (through fault-tolerant
disk configurations, dual controllers with their own dedicated battery-backed caches and power supplies) and
on the server side (through server clustering, with each
of the member servers featuring dual, hot-swappable
components). It is reasonable to expect the same when


it comes to SAN connectivity.


Unfortunately, the solution is not as simple as installing
two FC host bus adapters (HBAs) and connecting them
to two redundant switches, each of which in turn,
attaches to separate FC connections on the storage
controller. This is because without additional provisions,
Windows would detect two distinct I/O buses and separately enumerate devices connected to each (resulting
in a duplicate set of drives presented to the operating
system), which could potentially lead to data corruption. To resolve this issue, Microsoft Windows 2003
Server includes native support for Multipath I/O, which
makes it possible to connect dual HBAs to the same
target storage device with support for failover, failback,
and load balancing functionality.
Each implementation of a Windows 2003 Server cluster
must belong to a dedicated zone, to eliminate the potential adverse effects of the disk access protection mechanism included in the clustering software on other
devices. This does not apply, however, to storage controllers, which can be shared across multiple zones, as
long as they are included on the Cluster/Multi-Cluster
Device HCL. In addition, you should avoid collocating
disk and tape devices in the same zone, as the SCSI
bus reset commands can interfere with normal tape
operations.
Remember, the rule regarding consistent hardware and
software setup across all cluster nodes extends to SAN
connections - including host bus adapter models, their
firmware revision levels, and driver versions.
You should also ensure that the automatic basic disk volume mounting feature is disabled. This does not apply
to volumes residing on dynamic disks or removable
media, which are always automatically mounted. Earlier
versions of Windows would spontaneously mount every
newly detected volume. In a SAN environment, this
could create a problem if zoning or LUN masking was
misconfigured or if prospective cluster nodes had
access to the shared LUNs prior to installation of the
clustering software. This feature is configurable, and
disabled by default, in Windows 2003 Server. It can be controlled by running the MOUNTVOL command or by using the AUTOMOUNT option of the DISKPART utility.
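For reference, this is roughly what that looks like from the command line; both utilities ship with Windows Server 2003.

    rem Disable automatic mounting of new basic volumes (mountvol /E re-enables it)
    mountvol /N

    rem The equivalent setting through DISKPART
    diskpart
    DISKPART> automount disable
    DISKPART> exit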


iSCSI Storage

So far, we have discussed the two most common choices - direct-attached SCSI (with popularity resulting
from its long-lasting, widespread commercial presence and low pricing) and Fibre Channel storage-area
networks (FC SANs), which are frequently chosen
because of their superior performance and reliability.
Unfortunately, the cost associated
with FC SAN deployments is prohibitive for most smaller or less-critical environments, whose
requirements cannot be satisfied
with parallel SCSI because of its
performance and scalability limitations. The introduction of iSCSI
resolves this dilemma by combining the benefits of both technologies and at the same time avoiding their biggest drawbacks.
iSCSI is an acronym derived from
the term Internet SCSI, which succinctly summarizes its basic premise. iSCSI uses IP packets to carry
SCSI commands, status signals,
and data between storage devices
and hosts over standard networks. This approach offers
a tremendous advantage by leveraging existing hardware
and cabling (as well as expertise). Although iSCSI frequently uses Gigabit Ethernet, with enterprise class

switches and specialized network adapters (containing


firmware that processes iSCSI-related traffic, offloading
it from host CPUs), its overall cost is lower than that of equivalent Fibre Channel deployments. At the same time, however, features such as addressing or automatic device discovery, built into the FC SAN infrastructure, must
be incorporated into iSCSI specifications and implemented in its components.
iSCSI communication is carried
over a TCP session between an
iSCSI initiator (for which functionality is provided in Windows 2003 in
the form of software or a mix of
HBA firmware and Storport miniport driver) and an iSCSI target
(such as a storage device), established following a logon sequence,
during which session security and
transport parameters are negotiated. These sessions can be made
persistent so they are automatically restored after host reboots.

On the network level, both the initiator and the target are assigned unique IP addresses, which allow for node identification. The target, however, is actually accessed by a combination of IP address and port number, which is referred to as a portal. In the iSCSI protocol, addressing



is typically handled with the iSCSI Qualified Name (IQN) convention. Its format consists of the type identifier (i.e., "iqn."); a registration date field (in year-month notation); a period followed by the domain in which the name is registered (in reversed sequence); a colon; and the host (or device) name, which can be autogenerated (as is the case with the Microsoft implementation, where it is derived from the computer name), preassigned, or chosen arbitrarily to serve as a descriptor providing such information as device model, location, purpose, or LUN.
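
As an illustration, an IQN assembled according to this convention might look like the following (the iqn.1991-05.com.microsoft prefix reflects the commonly documented Microsoft default, while the trailing host name is a hypothetical example):

    iqn.1991-05.com.microsoft:node1.contoso.com

Here, "iqn." is the type identifier, "1991-05" is the year-month registration date, "com.microsoft" is the registered domain in reversed sequence, and the portion after the colon identifies the host.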
Targets are located either statically, by configuring the software initiator with target portal parameters (and corresponding logon credentials) or by leveraging functionality built into HBAs on the host, or they are discovered automatically using information stored on an Internet Storage Name Server (iSNS). This server offers a centralized database of iSCSI resources, where iSCSI storage devices register their parameters and status, which initiators can subsequently reference. Access to individual records can be restricted based on discovery domains, which serve a purpose similar to FC SAN zoning.
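
For instance, with the Microsoft software initiator, an iSNS server can be registered from the command line as sketched below (the address is a made-up example; verify the exact syntax with iscsicli /? on your system):

    C:\> iscsicli AddiSNSServer 192.168.10.5
    C:\> iscsicli ListiSNSServers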
In a typical Microsoft iSCSI implementation, the initiator software running on a Windows host server (with a compatible NIC or an HBA that supports the Microsoft iSCSI driver interface) is used to mount storage volumes located on iSCSI targets and registered with the iSNS server.

Installation of the initiator includes an iSNS client and administrative features, in the form of the iSCSI Initiator applet in the Control Panel, Windows Management Instrumentation, and the iSCSI command-line interface (iSCSICLI). The software-based initiator lacks some of the functionality that might be available with hardware-based solutions (such as support for dynamic volumes or booting from iSCSI disks).
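
A minimal command-line sketch of this workflow follows; the portal address and target name are fictitious examples, and the exact parameters should be confirmed with iscsicli /? or the initiator documentation:

    C:\> iscsicli QAddTargetPortal 192.168.10.20
    C:\> iscsicli ListTargets
    C:\> iscsicli QLoginTarget iqn.2008-01.com.example:array1

The "quick" variants shown here accept defaults for most parameters; the full AddTargetPortal and LoginTarget commands expose additional options, such as CHAP credentials.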
To provide a sufficient level of security and segregation, consider isolating the iSCSI infrastructure on a dedicated storage network (or separating the shared environment with VLANs), as well as applying authentication and encryption methods. With the Microsoft implementation, authentication (as well as segregation of storage) is handled with the Challenge Handshake Authentication Protocol (CHAP), relying on a password shared
between an initiator and a target, provided that the latter supports it. Communication can be encrypted directly on the end devices, using the built-in features of high-end iSCSI HBAs, third-party encryption methods, or Microsoft's implementation of IPSec.
Although network teaming is not supported on iSCSI interfaces, it is possible to enable communication between an initiator and a target via redundant network paths, accommodating setups with multiple local NICs or HBAs and a separate interconnect for each. This can be accomplished either by implementing multiple connections per session (MCS), which leverages a single iSCSI session, or by using Microsoft Multipath I/O (MPIO), which creates multiple sessions. For both MCS and MPIO, the distribution of I/O across connections (applied to all LUNs involved in the same session) or sessions (referencing individual LUNs) depends on Load Balance Policies, configured by assigning an Active or Passive type to each of the network paths. This results in one of the following arrangements:
Fail Over Only: uses a single active path as the primary and treats all others as secondaries, which are attempted in round-robin fashion if the primary fails; the first available path found becomes the new primary.

Round Robin: distributes iSCSI communication evenly across all paths in round-robin fashion.

Round Robin with Subset: operates with one set of paths in Active mode and the remainder in Passive mode; traffic is distributed according to the round-robin algorithm across all active paths.

Weighted Path: selects a single active path by picking the one with the lowest value of an arbitrarily assigned weight parameter.

Least Queue Depth (available only with MCS): sends traffic to the path with the fewest outstanding requests.
The multipathing solution you select depends on a number of factors, such as support on the target side, the required level of Load Balance Policy granularity (individual LUN or session level), and the hardware components involved (MCS is recommended in cases where a software-
based initiator, without specialized HBAs on the host side, is used). Regardless of your decision, take advantage of this functionality as part of your clustering deployment to increase the level of redundancy.
When incorporating iSCSI storage into your Windows Server 2003 cluster implementation (note that Microsoft does not support it on Windows 2000), also ensure that components on the host side fully comply with the iSCSI device logo program specifications and with basic clustering principles. Take domain and network dependencies into account. Also bear in mind that besides the SCSI RESERVE and RELEASE commands (which provide basic functionality), iSCSI targets must support SCSI PERSISTENT RESERVE and PERSISTENT RELEASE to allow for all of the Load Balance Policies and persistent logons.
The latter requires that a persistent reservation key be configured on all cluster nodes. This is done by choosing an arbitrary 8-byte value, with the first 6 bytes unique to each cluster
and the remaining 2 bytes varying between its nodes. The value is entered in the PersistentReservationKey entry (of REG_BINARY type) under the HKLM\System\CurrentControlSet\Services\MSiSCDSM\PersistentReservation registry key on each cluster member. In addition, the UsePersistentReservation entry of REG_DWORD type is set to 1 in the same registry location. You should also enable the Bind Volumes initiator setting (in the Properties dialog box of the iSCSI Initiator Control Panel applet), which ensures all iSCSI-hosted volumes are mounted before the Cluster Service attempts to bring them online.
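
A hedged sketch of setting these registry values with the built-in REG utility is shown below; the key path mirrors the one described above, while the 8-byte value itself is only an arbitrary example (its last two digits would be varied on each node):

    C:\> reg add HKLM\System\CurrentControlSet\Services\MSiSCDSM\PersistentReservation ^
             /v UsePersistentReservation /t REG_DWORD /d 1 /f
    C:\> reg add HKLM\System\CurrentControlSet\Services\MSiSCDSM\PersistentReservation ^
             /v PersistentReservationKey /t REG_BINARY /d aabbccddeeff0001 /f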
To avoid network congestion-related issues, consider setting up a dedicated Gigabit Ethernet network or implementing VLANs with non-blocking switches that support Quality of Service. Optimize bandwidth utilization by implementing jumbo frames and increasing the Maximum Transmission Unit (MTU) value.


Conclusion
We've reviewed the general principles of server clustering and presented the hardware and software criteria that must be taken into consideration in its design. While the cost of implementing this technology has decreased significantly in recent years, making it affordable outside of high-end environments, there are still scenarios where its use might not be economically viable (such as development, testing, or training).
Fortunately, it is possible to overcome the constraints imposed by its storage and network requirements, without any significant hardware investment, by leveraging the widely popular server virtualization methodology.

This content was adapted from Internet.com's ServerWatch Web site and was written by Marcin Policht.

Internet.com eBooks bring together the best in technical information, ideas and coverage of important IT
trends that help technology professionals build their knowledge and shape the future of their IT organizations.
For more information and resources on storage, visit any of our category-leading sites:
www.EnterpriseStorageForum.com
www.internetnews.com/storage
www.linuxtoday.com/storage
www.databasejournal.com
http://news.earthweb.com/storage
http://www.internet.com/storage
For the latest live and on-demand Webcasts on storage, visit: www.internet.com/storage
