To realize the full potential of this Sybex electronic book, you must have Adobe Acrobat Reader with
Search installed on your computer. To find out if you have the correct version of Acrobat Reader, click on
the Edit menu—Search should be an option within this menu file. If Search is not an option in the Edit
menu, please exit this application and install Adobe Acrobat Reader with Search from this CD (double-
click rp500enu.exe in the Adobe folder).
Navigation
Navigate through the book by clicking on the headings that appear in the left panel;
the corresponding page from the book displays in the right panel.
Study Guide
Second Edition
Brad Hryhoruk
Diana Bartley
Quentin Docter
Copyright © 2002 SYBEX Inc., 1151 Marina Village Parkway, Alameda, CA 94501. World rights reserved. No part of this
publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photo-
copy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher.
ISBN: 0-7821-4087-4
SYBEX and the SYBEX logo are either registered trademarks or trademarks of SYBEX Inc. in the United States and/or other
countries.
Screen reproductions produced with FullShot 99. FullShot 99 © 1991–1999 Inbit Incorporated. All rights reserved.
FullShot is a trademark of Inbit Incorporated.
The CD interface was created using Macromedia Director, COPYRIGHT 1994, 1997–1999 Macromedia Inc. For more
information on Macromedia and Macromedia Director, visit http://www.macromedia.com.
Internet screen shot(s) using Microsoft Internet Explorer 5.0 reprinted by permission from Microsoft Corporation.
SYBEX is an independent entity from Citrix Systems, Inc., and not affiliated with Citrix Systems, Inc. in any manner. This
publication may be used in assisting students to prepare for a Citrix Certified Administrator Exam. Neither Citrix Systems,
its designated review company, nor SYBEX warrants that use of this publication will ensure passing the relevant exam. Cit-
rix is either a registered trademark or trademark of Citrix Systems, Inc. in the United States and/or other countries.
TRADEMARKS: SYBEX has attempted throughout this book to distinguish proprietary trademarks from descriptive terms
by following the capitalization style used by the manufacturer.
The author and publisher have made their best efforts to prepare this book, and the content is based upon final release soft-
ware whenever possible. Portions of the manuscript may be based upon pre-release versions supplied by software manu-
facturer(s). The author and the publisher make no representation or warranties of any kind with regard to the completeness
or accuracy of the contents herein and accept no liability of any kind including but not limited to performance, merchant-
ability, fitness for any particular purpose, or any losses or damages of any kind caused or alleged to be caused directly or
indirectly from this book.
10 9 8 7 6 5 4 3 2 1
Sybex is proud to have served as a cornerstone member of CompTIA’s Server+ Advisory Committee.
Just as CompTIA is committed to establishing measurable standards for certifying individuals who will
support server environments in the future, Sybex is committed to providing those individuals with the
skills needed to meet those standards. By working alongside CompTIA, and in conjunction with other
esteemed members of the Server+ committee, it is our desire to help bridge the knowledge and skills
gap that currently confronts the IT industry.
In the year since its release, the Server+ has gained industry-wide recognition as a solid indicator of
competency in server technologies. Microsoft recently incorporated the Server+ certification into their
new MCSA (Microsoft Certified Systems Administrator) program as an elective option when paired with
CompTIA’s A+ certification. Such integration into vendor-specific certification programs is a strong
endorsement for Server+ and bodes well for those who possess it.
Our authors, editors, and technical reviewers have worked hard to ensure that this Server+ Study Guide
is comprehensive, in-depth, and pedagogically sound. We’re confident that this book will meet and
exceed the demanding standards of the certification marketplace and help you, the Server+ exam can-
didate, succeed in your endeavors.
Neil Edde
Associate Publisher—Certification
Sybex, Inc.
I would like to thank my wife Kara for her unwavering support and uncon-
ditional love. I would also like to thank the entire Sybex crew. You are all
great to work with, and very underappreciated.
—Quentin Docter
to you as well as a means of finding out if you remember the major features
discussed.
Exam essentials are brief statements (one sentence) that reemphasize the
most important points that you need to be aware of prior to taking the exam.
Each statement is followed by a brief explanation of why this point is essen-
tial. Be sure to know these essentials before proceeding to the next chapter.
Key terms are exactly what the name implies: a collection of important
terms unique to the chapter and exam. They are defined within the context
of the chapter and then sorted into a list at the end of the chapter. You need
to be aware of these terms as well as their meanings in order to successfully
challenge the Server+ Exam.
The most significant feature in our Study Guides is the practice exam.
Each chapter includes 20 review questions at the end. These practice ques-
tions test your comprehension of the information and key details covered in
each chapter. It is imperative that you work through these chapter tests.
They not only help you remember the information presented in the chapters,
but also assist you in preparing for the real Server+ Exam.
Don’t just study the questions and answers—the questions on the actual
exam will be different from the practice ones included in this book and on the
CD. The exam is designed to test your knowledge of a concept or objective, so
use this book to learn the objective behind the question.
allocated. All the questions are multiple choice and may have one or
more correct answers. The questions are often tricky, with several
plausible choices; you must select the most correct answer. Be sure to read
each question carefully. The Server+ Exam at this time is not adaptive. This
means that you can skip questions and come back to them at a later point in
the exam. CompTIA has not announced a date that the Server+ Exam will
become adaptive, or in fact whether it will at all.
Exam Objectives
As with the other CompTIA certifications, a series of exam objectives, or
topics, has been identified by the Advisory Committee as being key to
becoming certified as a competent technician. In the Server+ certification,
these objectives fall under seven major areas: installation, configuration,
upgrading, proactive maintenance, environment, troubleshooting and prob-
lem determination, and disaster recovery. Each key area is weighted on the
exam differently. The exam weights are set to focus on the areas that a server
technician needs to be most knowledgeable in.
Exam objectives are subject to change at any time without prior notice and
at CompTIA’s sole discretion. Please visit the Certification page of the CompTIA
website at www.comptia.org for the most current listing of exam objectives.
Behind every computer industry exam you are sure to find exam objec-
tives—the broad topics on which the exam developers want to ensure your
competency. The official Server+ exam objectives are listed here.
best practices; confirm that the upgrade has been recognized; review
and baseline; document the upgrade
3.10 Upgrade UPS
Perform upgrade checklist including: locate and obtain latest test
drivers, OS updates, software, etc.; review FAQs, instructions, facts
and issues; test and pilot; schedule downtime; implement using ESD
best practices; confirm that the upgrade has been recognized; review
and baseline; document the upgrade
Use the techniques of hot swap, warm swap, and hot spare to ensure
availability
Use the concepts of fault tolerance/fault recovery to create a disaster
recovery plan
Develop disaster recovery plan
Identify types of backup hardware
Identify types of backup and restoration schemes
Confirm and use off site storage for backup
Document and test disaster recovery plan regularly, and update
as needed
7.2 Restoring
Identify hardware replacements
Identify hot and cold sites
Implement disaster recovery plan
A. Adapter grouping
E. Adapter teaming
3. If you have a RAID 3 system made up of four 20GB drives, how much
usable disk storage space would you have?
A. 80GB
B. 60GB
C. 40GB
D. 20GB
4. You want to filter packets of certain TCP/IP types coming in from and
going out to the Internet. What type of server application do you
need?
A. Firewall
B. Proxy server
C. Router
D. Gateway
B. L2
C. L3
D. L4
A. TCP/IP
B. NetBEUI
C. IPX/SPX
D. AppleTalk
A. DHCP
B. DNS
C. UDP
D. SMTP
A. 7
B. 5
C. 3
D. 6
A. 183.239.179.171
B. 127.0.0.0
C. 240.64.0.24
D. 172.16.0.0
A. SNMP
B. SMTP
C. LDAP
D. POP3
A. Thinnet
B. Thicknet
C. UTP
D. STP
B. Three
C. Two
D. One
13. Your server’s backups are taking so long that you find you must start
them as you’re leaving work for the day and they often don’t finish
until noon the next day. What are some options you can consider
(select all that apply)?
A. Set up differential backups.
A. EDO
B. ECC
C. RD RAM
D. SIMM
15. You want to set up your computer room so that an exact duplicate
exists in a different building and you somehow replicate the informa-
tion on the servers in your production room to this duplicate room.
What kind of function are you performing?
A. Backups
B. Fault tolerance
C. High-availability
D. Disaster recovery
16. Single mode fiber optics uses which of the following as a light source?
A. Laser
B. LED
C. Fluorescent
D. Incandescent
A. Error checking
C. Determining parity
D. Installing chips
18. What are some common diagnostic tools that you can utilize no
matter what NOS you’re working with (choose all that apply)?
A. Event logs
B. TCP/IP software
C. FDISK
D. BIOS utilities
A. Through jumpers
B. On the motherboard
D. With a CD-ROM
A. Intel Itanium
B. AMD Duron
C. Intel Celeron
D. AMD Athlon
21. Name some areas of concern to look at when you are attempting to
diagnose system bottlenecks (select all that apply).
A. IRQ conflicts
B. CPU speed
D. SCSI version
22. What kind of hard disks will typically be installed in a RAID 5 system?
A. ATA
B. IDE
C. SCSI
D. ESD
23. Most servers today are equipped with what kind of system memory
chips?
A. SIMMs
B. ECC SIMMs
C. EDO SIMMs
D. DIMMs
A. They provide a clear picture of what the service techs have been
doing.
B. They provide a background of what has been done to a computer.
B. Two
A. It is affected by EMI.
B. It is affected by heat.
28. You have a single network card with four ports on it. What can that
card not be configured to do?
A. Adapter Load Balancing
B. Adapter teaming
29. Four network cards grouped together for Load Balancing will have
how many IP addresses?
A. Four
B. Three
C. Two
D. One
A. Category 3
B. Category 5
C. Category 4
31. You have just purchased a motherboard that supports dual proces-
sors. Which Pentium III processors can be used on the board?
A. Any Xeon with any P-II
C. Any P-III
D. Electrostatic discharge.
34. When carrying memory chips from one place to another, what type of
ESD equipment should you use?
A. Wrist strap.
B. ESD vest.
C. Antistatic bag.
C. The air space between the ceiling and the actual roof of a building
19. C. A PCI bus is configured through the system BIOS. See Chapter 3
for further information.
20. A. The Intel Itanium is a server-specific processor. All others listed
were originally created for a desktop computer. See Chapter 3 for fur-
ther information.
21. B, C, D. A CPU’s speed can create a bottleneck if the applications
and users trying to access the computer outpace the speed with which
the processor can answer requests. In SCSI you’ll also need to be con-
cerned about the hard disk’s RPMs and the SCSI version level. See
Chapter 12 for further information.
22. C. In almost all cases (except systems that utilize software RAID)
you’ll use SCSI drives in your RAID array. See Chapter 4 for further
information.
23. D. Most commercial servers today come equipped with 168-pin Dual
Inline Memory Modules (DIMMs). The reason for this is twofold: you
get 64-bit memory and SIMMs need to be installed in pairs whereas
DIMMs can be installed singly. See Chapter 3 for further information.
24. B. Maintenance logs provide a background of what has been done to
a computer. See Chapter 9 for further information.
25. C. When cabling a building, you should check the local building
codes. These codes will vary by locality. See Chapter 6 for further
information.
26. B. There is a 50-ohm terminator at each end of the bus on a Thinnet
network. See Chapter 6 for further information.
27. C. Fiber cable is made of glass or plastic. See Chapter 6 for further
information.
28. C. Fault tolerance requires more than one card, not more than one
port. See Chapter 6 for further information.
29. D. A group of network cards used in Load Balancing will have one IP
address. See Chapter 6 for further information.
30. B. Category 5 is the only option that supports transfer speeds of
1000Mbps. See Chapter 6 for further information.
31. B. With Pentium III processors, the multiplier and the FSB must
match. See Chapter 3 for further information.
32. D. With parity, if it is determined that there has been some corruption,
the system is halted. See Chapter 12 for further information.
33. D. ESD is electrostatic discharge—rapid discharge of static electricity
from one conductor to another of a different potential. If your body is
holding a static charge and you touch an electronic component, that
discharge can seriously damage the electronics. See Chapter 15 for
further information.
34. C. Antistatic bags are used to carry electronic equipment from one
place to another. This will prevent ESD from damaging the chips. See
Chapter 15 for further information.
35. C. The plenum is the space created for air circulation between a
drop-down ceiling and the roof, or under a raised floor; this space is
commonly used to run cables. See Chapter 6 for further information.
What Is a Server?
When preparing for the Server+ exam, one question needs to be
gotten out of the way immediately: What exactly are servers, and what
makes them special enough to deserve an entire exam dedicated to them?
The answer to this requires that the term server itself be defined. Put simply,
there are two key definitions of server in the Information Technology world:
serv·er (sûr′vər), n. 1. Computer software designed to assist other
computers on a network by performing tasks for them or providing
information to them. 2. Computer hardware optimized for the task
of running server software.
Each of these definitions needs to be considered separately, along with
its implications for what a “server” is. We’ll take some time in the following
sections to dissect these definitions, taking care to examine servers as
software as well as servers that operate solely as hardware. We will cover
scalability versus expandability, the relationship between security and
Server as Software
Let’s start by examining the first definition. For any computer to function, it
needs an operating system (OS). This is the code that tells the computer how
to function. You know that, of course. You have also probably encountered
the term NOS (network operating system), which is used to describe a server
OS. Things become a bit tricky, though, when we start trying to distinguish
an OS from a NOS.
The reason for this is that by the definitions we’ve just shown you, any OS
that can perform services or share files on the network is a server. Many
of you have used the file sharing capabilities of Windows 98, for instance.
All of Microsoft’s modern OSs have the ability to share out files, and even
to maintain NetBIOS browsing lists that allow computers to find each other
on the network. Even so, we don’t generally think of Windows 98 as a
“server OS,” and neither does the Server+ exam. Rather, the NOS term is
reserved for products such as Novell NetWare, Microsoft Windows 2000
Server, or Sun Solaris.
In order to decide which software you will need as your NOS software,
you will need to examine and consider the following characteristics:
Scalability
Security
Stability
Client prioritization
Reviewing each of these characteristics in full is a good starting place
when considering server hardware for your NOS. As such, we will start by
examining the concept of scalability and how it relates to server performance.
Scalability
Most computers serve only a single master, in that the user working locally on
the machine is the only one giving orders. The user may run one application,
or a number of them, but the amount of computer power a single user needs
is relatively limited, especially as we enter the world of multi-gigahertz pro-
cessing on the desktop. Because only one user is expected to be using the OS
at a time, a normal OS is intended for use on machines with limited resources.
Windows 98, for instance, cannot recognize or use more than one processor.
[Figure: hub-connected servers and their data stores]
Security
Network operating systems also are generally far more secure (or at least
securable) than client operating systems. This enhanced security can take the
form of a username/password database, access restrictions on files or
services, or any number of system security policies.
One of the odd things about the Server+ Exam is that, because most
of the questions follow a generic format, and because very little security
information falls into this “generic” category, you will find few system
security–related elements on the exam. This is strange, of course, because
network security is among the primary job functions of a server administrator!
The physical security of the server, however, is a major concern of the exam.
Locking down the server room will be dealt with in Chapter 13, “Managing
and Securing the Server Environment.” Some general security topics will
also be considered on a NOS-by-NOS basis in Chapter 7, “Network Operating
Systems.”
Stability
While most desktop PCs are shut down each night, and are used only a few
hours each day, servers are generally on 24/7, and as such they need an
OS that is extremely stable. Moreover, as tens or hundreds of people are
interacting with the server each day, it is critical that the OS be resilient and
able to deal with this constant onslaught of requests without locking up or
giving up.
To help guard the health of these machines, NOS software is often pickier
about what software it allows to run, and which applications and drivers
it will allow you to install. While this helps to insulate the server from prob-
lems caused by bad software, it also means that NOS applications often are
specifically written for the OS, and can be extremely expensive.
Client Prioritization
One last characteristic of a server OS is that it gives priority to client
connections when allocating resources. The primary purpose of a NOS is to
take care of clients, and as such a user at the server console is treated as just
another user, or sometimes even given a lower priority than network users.
Solaris is a Unix-based system, and Unix operating systems are all based on a
server-class platform. Linux is also Unix based. Still, both Linux and Solaris are
often used as a desktop OS, and are as flexible as Windows in that they can be
used for pretty much any role in the enterprise. Start at www.linux.org/dist/
to learn more about the emerging Linux challenge to the established NOS/OS
leaders.
Servers as Hardware
The second definition of a server is one that involves specialized hardware
designed to handle the extreme demands of NOS software and network
users. Companies such as IBM and Compaq produce computers specifi-
cally for these needs, and sell them in separate product lines. Compaq, for
instance, has its extremely popular Proliant series, and Dell sells the Power-
Edge line. At a very general level, servers are essentially just enhanced PCs.
Many managers look at the price of a new Compaq Proliant 1GHz server
and say, “Why are we paying $10,000 for this computer, when we could get
a Compaq PC that is just as fast for $1,000 at Circuit City?”
This is a valid question, because Windows 2000 Advanced Server or Sun’s
Solaris can be installed on a desktop-class PC without any trouble. If you are
in the position of proposing a server purchase to a manager or client, you
should be prepared to explain the reasons behind the higher cost of special-
ized server hardware.
To help you with that explanation, let’s take a closer look at the benefits
a server provides for that extra money:
Expandability
One of the most important characteristics of server-class hardware is that
it is generally built with generous expansion capability. Most servers allow
for far more RAM (often over 4GB), more drive space (most servers have
5–10 drive bays) and more processors—it is hard to get a desktop PC that fits
8 processors because the cases for normal PCs simply do not have room for
that much hardware. Along with all of this additional hardware comes
the need for additional fans and a larger power supply as well, which also
take up room.
Dependability
Server hardware needs to be reliable. Unlike desktop PCs, which are gener-
ally shut down each evening, servers often are expected to run constantly for
weeks or months. The length of time a server has been running, or sometimes
the percentage of time it has been running, is referred to as its uptime. Some
servers prominently display the amount of time they have been up on their
console, while others (Windows, anyone?) tend to hide that information!
Any time a server is not running, the dreaded word downtime is used
to describe the amount of time that it is off. Because servers are critical to
modern networks, and networks are critical to modern organizations, a
server down situation rarely goes unnoticed. E-mail doesn’t work, or users
can’t get to files, or “the Internet doesn’t work,” and calls start flooding into
the help desk.
Along with backup and security, the prevention of downtime is probably
one of the most important jobs of an administrator. Server-class hardware
helps to maximize uptime through higher quality hardware and the ability
to duplicate critical hardware for redundancy.
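Uptime targets are often quoted as availability percentages; a quick calculation (the percentages below are purely illustrative, not exam figures) converts such a percentage into the downtime it allows per year:

```python
# Convert an availability percentage into allowed downtime per year.
# The percentages below are illustrative, not exam figures.

def downtime_per_year(availability_pct):
    """Hours of downtime permitted per year at a given availability."""
    hours_in_year = 365 * 24  # 8,760 hours
    return hours_in_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows about "
          f"{downtime_per_year(pct):.2f} hours of downtime per year")
```

At 99.9 percent availability, for example, the server may be down only about 8.76 hours in an entire year, which is why administrators track uptime so closely.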
Quality
One of the reasons servers cost more than desktop PCs is that the pieces used
to build the server are better. No one argues about why a Porsche costs more
than a Yugo, but somehow a lot of people who drive very nice cars find it
difficult to understand why they should pay for quality in their server room
as well.
Server components are manufactured to higher standards, both in terms
of the materials used and the precision of the craftsmanship. Moreover, these
components are tested to ensure that they work well together. This is done
to ensure that a server will remain operating and reliable regardless of the
amount of work required of it. Much of this information is very different
from the way those same resources are discussed in the A+ book, which deals
with the maintenance of “normal” desktop PCs.
Redundancy
Quality components are great, but even the best machines sometimes
fail, and computers are no exception. In order to try to prevent hardware
problems from resulting in immediate downtime, though, most server-class
computers support redundant hardware for key components. This practice
is known as redundancy.
Redundant components can include power supplies, for instance. If a
server has two or more power supplies, both of them can work together to
power the system. However, if one of them fails (or is unplugged), the other
is able to take on an increased load and power the entire system. Other
examples of commonly duplicated hardware include hard drives, drive
controllers, and network cards.
Two items that are not redundant are processors and RAM modules. Even if
you have four processors in a machine, if one processor fails, the server will
go down. The same with RAM. Remember that expandability and redundancy
are different things!
Server-Only Features
Besides just supporting more and better hardware, and offering helpful ser-
vices not available on regular operating systems, modern servers also can be
equipped with a dizzying array of add-on equipment. Although some of these
components are making their way into the desktop computer environment,
they are normally associated with server environments. These include RAID
controllers (standard on most servers), SCSI controllers, an uninterruptible
power supply (UPS), external drive arrays, fax or modem bank hardware, and
tape backup drives. Any of these can be installed into desktop-class machines
as well, but generally their expense and resource requirements dictate that
they be used in server-class machines with server-class OSs.
You may already be in charge of a network, and have your own server(s)
to refer to as you read this book, but if not it might be useful to get an idea
of what these machines are like. Before reading too much further, you may
want to take a look at some of the beasts that the Server+ certification
While this book is being written, HP and Compaq are in the process of
merging, so things may be changing a bit there. For now, though, they
have separate product lines.
Server Roles
Servers must perform a dizzying variety of tasks on the network.
On smaller networks a single machine might perform many or all of these
tasks, and that is perfectly workable because servers are designed to be
good at doing multiple tasks simultaneously. On larger networks, though,
specialization allows each machine to be tailored specifically for the tasks
it is assigned.
This section will sample a few of these tasks for you and give you an
overview of what servers do on a network and how each of these tasks takes
its toll on server resources. Three general types of server roles will be detailed
in this section: security, network, and user. These are loosely grouped, and
some server roles cross over between the categories, so concentrate more on
what they do than where they are grouped. As you read through this section,
keep two questions in mind:
1. What roles does this server play on the network, and how does its
performance impact network users?
2. What type of operating system and hardware should be used to
improve the efficiency of the machine running this task?
Security Roles
Networks have evolved into the storage location for almost all documents
and data in most large and midsize companies. The protection of this data—
both from destruction and from unauthorized access—falls to the server
administrator.
Network security is generally provided by the operating system, and as
we look through each of the following sections, you will be given pointers to
websites that detail how these services are implemented into various server
systems. Because the Server+ exam itself is vendor-neutral, you don’t have
to spend a lot of time studying this stuff, but if you are interested in learn-
ing more about a topic, some of these URLs could come in handy as a
starting point.
Account Management
Most of the following services depend on the ability of the server to deter-
mine one fact—who it is that is trying to use the service. This is generally
accomplished by using one or more servers on the network to store and
authenticate user credentials.
Account management servers generally keep track of (among other
things) two basic pieces of information—the user and the password.
Internet web servers, where thousands of users might visit a server to view
web content. The users do not have individual accounts, and they all log
on through a single generic account. FTP servers also often allow file
retrieval access to anonymous users.
Password
It is almost inconceivable that anyone reading this book needs to have pass-
words explained. If you do, all I can say is you have a bit of work ahead of
you! If a username says “this is who I am,” a password says “and here’s proof
I am who I say I am.” Password security is a critical part of server security,
because if your account passwords—especially administrative passwords—
are discovered, the server and network security are compromised.
Many of the services listed in the authentication section that follows are
specifically designed to protect system passwords. Sadly, even the best secu-
rity software cannot defend passwords scribbled on sticky notes and posted
on a user’s monitor. An effective password strategy needs to include a pass-
word scheme that balances security and usability.
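As a rough illustration of such a scheme (the rules, minimum length, and common-password list here are invented, not drawn from any standard), a minimal policy check might look like this:

```python
# Minimal password-policy sketch: long enough, mixes letters and digits,
# and is not on a list of well-known passwords. Rules are hypothetical.

COMMON_PASSWORDS = {"password", "letmein", "changeme"}

def acceptable(pw, min_len=8):
    """Return True if the password satisfies this (illustrative) policy."""
    return (len(pw) >= min_len
            and any(c.isdigit() for c in pw)
            and any(c.isalpha() for c in pw)
            and pw.lower() not in COMMON_PASSWORDS)

print(acceptable("password"))   # False: on the common list, and no digit
print(acceptable("Srv2002ok"))  # True: length, letters, and digits all pass
```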
To your dismay, the committee reports back that not only are password
standards not uniform throughout the company, but that many offices
do not have any password policy at all. In attempting to craft a password
strategy, you find the following:
The corporate intranet and FTP servers both support plain text
passwords.
Users often share machines, and many times they use a shared network
account.
This example is representative of the types of questions you might get. All
of the problems and solutions are generic because getting specific would
require mentioning particular authentication methods or require you to per-
form tasks on a particular directory structure. This would require dealing
with a certain vendor, and in most cases the exam is more interested in giving
you logic puzzles than it is testing your knowledge of a particular product.
Authentication
The process of submitting a username/password set and having it tested
against credentials stored in a server database is called authentication. There
are a number of methods of authentication available to a server. Here is a
sampling, arranged roughly in order of least secure to most secure:
Plain Text This is the simplest form of authentication. In plain text
authentication, username and password information is simply sent out
over the network in clear text—standard ASCII code that can be inter-
cepted and read easily. Plain text authentication is highly frowned upon
in secure environments. Scratch that... plain text is highly frowned upon
in any environment.
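To see why, consider this sketch (the packet contents and the use of a SHA-256 digest are illustrative assumptions, not a description of any particular protocol): a captured plain text packet hands an eavesdropper the password verbatim, while a one-way digest does not reveal it directly.

```python
# Illustrative only: compare what an eavesdropper sees when credentials
# cross the wire in plain text versus as a one-way digest.
import hashlib

password = "s3cret"  # hypothetical password

plaintext_packet = f"USER admin PASS {password}".encode("ascii")
digest_packet = hashlib.sha256(password.encode()).hexdigest().encode("ascii")

# The plain text packet contains the password verbatim...
print(b"s3cret" in plaintext_packet)   # True
# ...while the digest packet does not reveal it directly.
print(b"s3cret" in digest_packet)      # False
```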
Many people take security for granted. After all, what’s the big deal? “There
is nothing on my computer worth stealing.” These are the famous last
words of many who have been caught up in an incident where their com-
puter has been broken into.
After I assisted them with some research as well as checking into the log
files with the ISP, we discovered that a hacker had broken into their network
and was using their high-speed connection for illegal activities. Since the
operating system was not providing adequate security, this hacker easily
broke in and then masqueraded as a computer within the network. Never
take your network security for granted.
[Figure: authentication flow. The user enters credentials; they are checked by the AD server; if the credentials are successfully checked, the server provides resources.]
Both the Active Directory Server and the SQL Server run copies of the
Active Directory and share the user database as well as the responsibility for
authentication. Common directory services servers include the following:
Microsoft’s Active Directory
Novell’s NetWare Directory Service (NDS)
Sun’s Solstice
The three services listed here are all based on the standards set by the Inter-
national Telecommunication Union (www.itu.int). The ITU’s X.500 directory
specification defines how accounts are created and managed. Because of
this, all three of these systems use similar structures and logic, making it
far simpler to manage multiple systems than ever before.
Networking Roles
Security work is the glamour job in the server world, and Active Directory
and Kerberos (a network security system, developed at MIT, which
verifies that a user is legitimate at login) seem to get all the attention
in the trade rags. Still, in order to make a network function smoothly,
a number of other services also need to be working in the background.
These services assist the network in locating servers, identifying computers,
connecting remote clients, or moving packets from one part of the network
to another.
You won’t be asked to know about the specifics of any one of these
technologies, since each of them is implemented a bit differently by different
server platforms, but you should be familiar with what they do, and the
basics of how they work.
Routing Services
One of the features that a server can offer to the network is to act as a router.
Routing and bridging services allow a server with multiple network interfaces
to link machines on either side. When acting as a router, the system must
build a routing table that shows which machines are available on which
interfaces.
As TCP/IP is by far the most common protocol you will need to deal with
(some NetWare environments still use IPX/SPX), the IP routing protocols are
the most important ones to keep in mind:
RIP The Routing Information Protocol (RIP) is a distance-vector pro-
tocol that enables computers to exchange routing information by means
of periodic routing table updates. RIP updates are sent to neighboring net-
works and RIP information from other routers is returned. The path with
the fewest hops (each router involved in a path is one hop) is used when
sending data.
OSPF Open Shortest Path First Protocol, or OSPF, is an open protocol,
meaning that it is a standard and that it is available for use in the public
domain. OSPF is a link-state routing protocol. Link-state advertisements
(LSAs) are sent to all other routers to allow them to update their routing
tables. These LSAs include changes to the routing table, but the actual
routing table itself is not sent, unlike RIP. OSPF is far more efficient than
RIP on large networks.
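The hop-count logic that RIP uses can be sketched in a few lines of Python (the network addresses and hop counts below are invented for illustration; real RIP also handles update timers, route timeouts, and loop prevention):

```python
# Toy distance-vector update: merge a neighbor's advertised routes
# into our routing table, keeping the path with the fewest hops.

def merge_routes(table: dict, neighbor: str, advertised: dict) -> dict:
    """table maps network -> (next_hop, hop_count)."""
    for network, hops in advertised.items():
        cost = hops + 1  # one extra hop to reach the neighbor itself
        if network not in table or cost < table[network][1]:
            table[network] = (neighbor, cost)
    return table

table = {"10.0.1.0": ("local", 0)}
# RouterB advertises that it can reach 10.0.2.0 in 1 hop and 10.0.3.0 in 3 hops.
merge_routes(table, "RouterB", {"10.0.2.0": 1, "10.0.3.0": 3})
# RouterC advertises a shorter path to 10.0.3.0, so it replaces RouterB's route.
merge_routes(table, "RouterC", {"10.0.3.0": 1})
print(table["10.0.3.0"])  # ('RouterC', 2)
```

Each periodic RIP update is essentially one call to a merge like this, once per neighbor.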
Firewall Server
A firewall is essentially a router turned bouncer. Firewalls are placed at the
edge of your network and are used to turn away communications from
unwanted or distrusted clients. Nearly every large corporate or organiza-
tional network now has at least one high-speed connection to the Internet.
While this makes it extremely easy for network clients to access the Web
and e-mail, the process also works in reverse, and networks are vulnerable
to attack from the Web. As such, nearly all firewalls are concerned with the
need to protect a local area network (LAN) from the perils of the Internet.
Figure 1.4 shows a common network configuration and introduces a concept
we need to define.
[Figure 1.4: Traffic from the public network passes through the gateway to the Internet into the DMZ, where the web server sits; a firewall server separates the DMZ from the private LAN, where a hub or switch connects the LAN PCs.]
Note the DMZ, or demilitarized zone. The DMZ is the buffer zone
between the Internet and your internal network. It is where any servers that
need to be exposed to the Web should be housed. In this case, a web server
is sitting in the DMZ, and a server running firewall software protects the
intranet. Any requests sent to the web server—including malicious DoS
(denial of service) attacks or Internet worms—will be able to reach that
server unhindered. The same requests or attacks directed toward the internal
network, though, will be intercepted by the firewall, which will be config-
ured to allow only particular information through.
In many cases, it is best to put the web server behind the firewall as well. The
firewall can let through HTTP requests while protecting the server.
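At its core, the firewall's decision is a rule lookup on each incoming request. A minimal Python sketch (the rule list and default-deny policy are assumptions for illustration, not any particular firewall product):

```python
# Toy packet filter: allow only traffic that matches an explicit rule;
# everything else is turned away (default deny). Illustrative only.

RULES = [
    # (destination, port) pairs that outside traffic may reach
    ("dmz-web-server", 80),   # HTTP to the web server in the DMZ
    ("dmz-web-server", 443),  # HTTPS to the same server
]

def allow(destination: str, port: int) -> bool:
    return (destination, port) in RULES

print(allow("dmz-web-server", 80))   # True: web requests reach the DMZ
print(allow("lan-pc-7", 80))         # False: nothing on the private LAN is exposed
```

The "configured to allow only particular information through" behavior described above is exactly this kind of rule matching, applied packet by packet.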
Proxy Server
A special kind of Internet access server is a proxy server, which is part
accountant and part traffic cop. Proxy servers are used to funnel all Internet
traffic through a single location, and because of this central point, they can
effectively manage Internet traffic. Notice in Figure 1.5 that clients on the
network direct all requests for Internet information to the proxy server
\\Trantor. The server then checks for a number of things:
1. Is the user allowed Internet access?
Although a standard firewall can perform the first two of these tasks, the
proxy’s ability to cache pages for users makes it invaluable in saving on lim-
ited bandwidth. If three users request a page, only the first user’s request
actually hits the Internet—the other users are then given the page from inside
the cache.
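That caching behavior can be sketched in a few lines of Python (the URLs and the fetch function are placeholders, not a real proxy implementation):

```python
# Toy proxy cache: the first request for a URL "hits the Internet";
# later requests for the same URL are served from the cache.

cache: dict[str, str] = {}
internet_hits = 0

def fetch_from_internet(url: str) -> str:
    global internet_hits
    internet_hits += 1          # stand-in for a real outbound request
    return f"<page for {url}>"

def proxy_get(url: str) -> str:
    if url not in cache:
        cache[url] = fetch_from_internet(url)
    return cache[url]

# Three users request the same page; only the first request goes out.
for _ in range(3):
    proxy_get("http://www.sybex.com/")
print(internet_hits)  # 1
```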
One service that both firewalls and proxy servers provide is logging of
user requests. Administrators can parse the log files for any of the key unac-
ceptable words or phrases, or just check to see who is downloading .mp3s
at work.
Most routing services require little in the way of resources, with extremely
fast network connections being the key to their success. Making sure that
these machines have quality network cards and up-to-date network card
drivers is critical. Also, because the proxy server caches large amounts of
data, it may need a large drive array.
For a great look at how routers, firewalls, and proxy servers work, check
out the excellent film, Warriors of the Net. It is available to view free at
www.warriorsofthe.net, but you will need a fast connection, as the
high-res version is 150MB. It’s worth the time and the wait, though.
Dial-In Server
A dial-in server is essentially a router that has a modem as one of its network
interfaces. The server answers calls coming in from remote clients, authen-
ticates them with the network, and then acts as a conduit, allowing them to
access resources on the network.
In order for this process to work, a dial-in protocol must also be available.
The most common of these are Point-to-Point Protocol (PPP) and the less-
used SLIP (Serial Line Internet Protocol). PPP is newer, and is generally more
efficient because it has error-checking mechanisms built into it. For more
information on how PPP and SLIP differ, check out www.ccsi.com/
survival-kit/slip-vs-ppp.html for one ISP’s explanation of the two.
SLIP is not used much anymore, but it is good to at least recognize the acro-
nym, just in case.
The hardware requirements of a dial-in server can be quite specialized
because the task of providing many—often dozens—of modem connections
is beyond the comm port capabilities of a standard machine. Specialized
expansion boards by companies like Digi (www.digi.com) allow a server to
support this higher hardware level.
In Figure 1.6, a Windows 2000 Server is using the Routing and Remote
Access Service (RAS) to support dial-in clients. The clients authenticate to a
Windows domain controller on the network, and are then able to connect for
e-mail and file access.
[Figure 1.6: The remote client dials in to the RAS server, which requests resources from the file server on the client's behalf; the file server returns the data to the RAS server, which transfers it out to the remote client.]
VPN Server
A virtual private network (VPN) is similar to a dial-in connection in that it
allows users to access their network remotely. Unlike standard dial-in,
though, a VPN connection involves a two-step process:
1. Users attach to the public Internet, either by dial-up or by configuring
their machine to use a high-speed connection.
2. Once an Internet link is established, users can start a VPN client to
make a connection across the Internet to a VPN server on their own
network. This server also needs to have a separate Internet connec-
tion. This process involves creating a secure tunnel connection through
the existing connection established in step 1. This secure connection is
called a Point-to-Point Tunnel Protocol (PPTP) connection.
The VPN connection is encrypted, and because all communication is
encapsulated within the VPN protocol, users can access network resources
through the VPN that they would otherwise be unable to see using standard
TCP/IP connectivity. Figure 1.7 shows a common VPN configuration. Note
how similar this process is to the one shown in Figure 1.6.
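The encapsulation idea behind a VPN tunnel can be sketched in Python. This is purely illustrative: the XOR "cipher" below is a stand-in for real VPN encryption, and the host names are invented:

```python
# Toy illustration of tunneling: the original LAN packet is encrypted and
# wrapped inside an outer Internet packet addressed to the VPN server.

KEY = 0x5A

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)  # placeholder cipher only

def encapsulate(inner_dest: str, payload: bytes, vpn_server: str) -> dict:
    inner_packet = f"{inner_dest}|".encode() + payload
    return {"dest": vpn_server, "payload": encrypt(inner_packet)}

def decapsulate(outer: dict) -> tuple[str, bytes]:
    inner = encrypt(outer["payload"])          # XOR twice restores the data
    dest, _, payload = inner.partition(b"|")
    return dest.decode(), payload

outer = encapsulate("fileserver.corp.local", b"GET report.doc", "vpn.corp.com")
print(outer["dest"])       # vpn.corp.com is all the public Internet ever sees
print(decapsulate(outer))  # the VPN server recovers the internal destination
```

Because the inner destination travels only inside the encrypted payload, hosts on the internal network stay reachable through the tunnel even though they are invisible to ordinary Internet traffic.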
[Figure 1.7: The remote client connects to the VPN server, which requests resources from the file server on the client's behalf; the file server returns the data to the VPN server, which transfers it out to the remote client.]
TCP/IP Services
TCP/IP has already been mentioned briefly in this chapter, and as you go
through this book you will continue to hear about it. As mentioned earlier,
this is essentially the only major protocol standard that you can depend on
any server to support. Because it is everywhere, certain server functions
needed by TCP/IP networking must be included on nearly all networks.
We will just mention these here, as Chapter 8, “TCP/IP,” deals in-depth
with understanding and configuring TCP/IP networking.
DHCP Server
The Dynamic Host Configuration Protocol is used to simplify TCP/IP con-
figuration on network clients. DHCP servers store the information needed to
bring a TCP/IP client onto the network; when a client first starts up, it
contacts the server to obtain an address, gateway, subnet mask, and DNS
server, among other things.
If these terms are not already familiar to you, Chapter 8 alone may not be
enough! In that case, Andrew Blank’s TCP/IP JumpStart (Sybex, 2000) is a
good reference. Vendor-specific TCP/IP books are also available, but the
JumpStart book is nice because it maintains the vendor neutrality that
CompTIA espouses.
The DHCP server itself can run on any platform, and clients from
multiple operating systems can use a single DHCP server. It is important to
remember, though, that DHCP requests are done through network broad-
casts, so you generally need either a DHCP server or a DHCP relay agent on
each network segment. More on relay agents in Chapter 8!
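What a DHCP server hands out can be sketched as a small lease pool in Python (the address range, mask, gateway, and DNS values are invented example data):

```python
# Toy DHCP pool: hand each new client the next free address from a
# range, along with the rest of its TCP/IP configuration, and return
# the same lease if the client asks again.
from ipaddress import IPv4Address

class DhcpPool:
    def __init__(self, first: str, last: str):
        self.next_free = IPv4Address(first)
        self.last = IPv4Address(last)
        self.leases: dict[str, str] = {}  # MAC address -> IP address

    def offer(self, mac: str) -> dict:
        if mac not in self.leases:
            if self.next_free > self.last:
                raise RuntimeError("address pool exhausted")
            self.leases[mac] = str(self.next_free)
            self.next_free += 1
        return {
            "address": self.leases[mac],
            "subnet_mask": "255.255.255.0",
            "gateway": "192.168.1.1",
            "dns_server": "192.168.1.10",
        }

pool = DhcpPool("192.168.1.100", "192.168.1.200")
print(pool.offer("00:a0:c9:14:c8:29")["address"])  # 192.168.1.100
print(pool.offer("00:a0:c9:14:c8:30")["address"])  # 192.168.1.101
```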
Name Server
Computers talk to each other quite happily using numbers, as digital infor-
mation all boils down to ones and zeros eventually. Human beings, on the
other hand, generally have an easier time with information presented to
them in the form of words and text characters. Because of this, name servers
allow both people and machines to have their own way. There are two pri-
mary types of name servers to keep in mind:
DNS Server The Domain Name System (DNS) has been in use for
nearly two decades now, and is the worldwide standard for identifying
computers on TCP/IP networks. DNS servers resolve TCP/IP host
names to IP addresses. In Figure 1.8, the host Client1 requests access to
server1.sybex.com. The name is resolved by the DNS server, and Client1
can start the connection.
WINS Server Figure 1.8 also demonstrates the functionality of a WINS
server. WINS stands for Windows Internet Name Service, and the
"Windows" part of that is a pretty good clue that this is not a vendor-neutral
service. Microsoft has been using its own naming structure—NetBIOS
naming—since the days of DOS, and a WINS server is used to support this
in a TCP/IP environment. Client1 can also access Server2, but does so
through a WINS server rather than a DNS server.
[Figure 1.8: Client 1 resolves server1.sybex.com through the DNS server and resolves Server 2's NetBIOS name through the WINS server.]
A configuration like the one in Figure 1.8, where both DNS and WINS servers
are in use, is common. These services can coexist, and even can help each
other out on occasion (see Chapter 8 for more on that).
Management Server
TCP/IP also provides a protocol specifically designed for network manage-
ment functions. The aptly named Simple Network Management Protocol
(SNMP) is used to allow a server on the network to collect information
about other devices and issue commands in return. We won’t spend a lot of
time on SNMP, as it is doubtful you will find detailed SNMP questions on
the exam. Even so, www.snmp.org/protocol/ is a good place to go for an
overview of what SNMP is about.
In most cases, naming and management services are almost unnoticeable
in terms of their effect on the server. Even a very small server can support
thousands of clients with no problems. Again, the key is having sufficient
bandwidth to the server.
User Services
Finally we arrive at the services that users can see and interact with. When
most people talk about a server, they are concerned with the tasks discussed
below. This does not, of course, mean that they are the most important ser-
vices. Like a quarterback on a football team, user services get all the press
and most of the resources. The underlying network services listed above,
though, are as important and underappreciated as offensive linemen!
Note that once these services are started, they consume significant
resources. For each of the following types of servers, you should be able to
identify which critical resources need to be boosted when planning.
File Server
The classic task of a network server is to store information that needs to be
shared among multiple users; in this role it is known as a file server. To suc-
cessfully store and share information, the server normally has to have a few
different elements in place. First, some sort of security needs to be present to
protect the files on the server. Different network operating systems handle
security in very different ways, but in all cases the server needs to be able to
ensure that users do not have access to files they should not see. Servers can
also make more subtle distinctions, such as allowing users to read a file but
not modify it.
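That read-versus-modify distinction can be sketched as a per-user permission check (the users, file, and permission letters are invented for illustration; real network operating systems use much richer permission models):

```python
# Toy file-server security check: each file lists what each user may do.
# "r" = read, "w" = modify.

PERMISSIONS = {
    "budget.xls": {"alice": "rw", "bob": "r"},
}

def can(user: str, action: str, filename: str) -> bool:
    return action in PERMISSIONS.get(filename, {}).get(user, "")

print(can("alice", "w", "budget.xls"))  # True: Alice may modify
print(can("bob", "w", "budget.xls"))    # False: Bob may only read
print(can("carol", "r", "budget.xls"))  # False: Carol has no access at all
```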
The requirements for a file server are heavily weighted toward its hard
drives. Server hardware often comes with multiple drives, because file servers
are expected to store enormous amounts of data. File servers also need fast
drives—Compaq uses 10,000 RPM drives in its servers. The drive controller
is also important, as is the network bandwidth available to the server. In
Chapter 4, “Storage Devices,” we will examine server drive configuration,
and you will notice that SCSI hardware is the overwhelming choice for
servers. This is because SCSI is faster and more expandable than IDE/EIDE.
Print Server
If you were interning to be a server on a network, it is likely that you would
start as a print server. Print servers require very little in the way of resources,
outside of requiring sufficient drive space to store files submitted for print-
ing. Even this is a relatively small requirement, because print jobs are gen-
erally stored only until they are printed, at which time they are deleted. The
process of network printing is enumerated below. Two terms you should be
familiar with when discussing printing are queue and spool:
Queue A queue is a list of documents waiting to be printed. The term
also describes the location where these documents are held.
Spool Spooling is the process of writing a document into the queue; in
fact, the queue is often called the spool file. Spooling allows a print job to
be sent to the server even if the printer is busy, thereby freeing up the client
to continue on other tasks.
1. The client chooses to print a document. Part of this process involves
choosing a printer (or accepting the default printer).
2. The client’s printer driver is used to format the document for printing
on the particular printer chosen.
3. The document is submitted as a job to a local print queue on the client.
This is optional, but as it immediately frees up the client PC to work
on other tasks, local spooling is pretty standard.
4. The job is then sent from the local print spool to the network print
server. This server has another print queue, and the document is
placed here.
5. The print job is placed in line with the jobs of other users, and when
the printer is prepared to print it, the job is spooled out to the printer.
6. The printer produces the document, and reports back to the server,
which deletes the job from its queue.
It is possible to tell the print server to keep all print jobs rather than delete
them. While this is good for seeing what has printed, it can eat up drive space
and is not normally recommended.
A print server’s primary task is to interface with machines that are pain-
fully slow by computer standards—even the fastest printers move at a glacial
pace compared to PC speeds. Because of this, print servers require minimal
hardware, and print services can often be combined with other tasks rather
than having a dedicated print server.
Application Server
File and print servers are in many ways the backbone of a network—
application servers are its brain. App servers are machines running server
processes that perform tasks on behalf of users, or interact with client
machines in the completion of tasks.
There are a number of different application servers, but three of the most
common are these:
Database server
E-mail server
Active web server
The key to a server being classified as an application server rather than a
file server has to do with how much work the server does on the data before
sending it to the client. A great example of this can be found in Microsoft’s
database family.
Microsoft Access is a database program that can share a database among
multiple users. Because of this, the Access data file itself can be placed on a
server and made available to network users. At that point, the server is shar-
ing out a database, but it is not an application server. The reason for this is
that if a client requests information from the database, the process shown in
Figure 1.9 is initiated.
[Figure 1.9: The client requests specific records, but the server sends the entire database file across the network.]
Notice that the client needs only a specific set of data, yet the server sends
the entire database across the network to the client, which is then responsible
for sorting out what it wants and discarding the rest of the information. This
is inefficient in two critical ways:
1. Time and bandwidth are wasted transferring unneeded rows of data.
Client-Server Architecture
The solution to this problem is the use of a client-server architecture, such
as the one available in Microsoft’s SQL Server. Client-server applications are
computer programs that are specifically designed to use the processing
power of both the server and the client machines in the completion of their
tasks. Generally this means that the client makes an initial request to the
server, and the server then does some initial processing on the request. The
result of that processing is then returned to the client, or to another machine
for additional work to be done with it.
If more than just a single client and server are involved, this is called an
“n-tier” architecture; n stands for the number of machines used in the
processing, meaning you could have a client and two servers involved
in the transaction, and it would be a “3-tier” design.
Figure 1.10 shows the same request being issued by the client as in
Figure 1.9, but with a significantly different response.
Do you see how this time the server has actually looked at what the client
needs and has preselected the data? By doing this, both the network and
the client are less heavily taxed, and the server is able to justify its expensive
hardware by actually doing something.
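The difference between the two figures comes down to where the rows are filtered, which can be sketched like this (the table data and query are invented; this shows the shape of the idea, not SQL Server's actual behavior):

```python
# Toy comparison: file-sharing database vs. client-server database.
# In the file-sharing model the whole table crosses the network and the
# client filters it; in the client-server model the server filters first.

ORDERS = [
    {"customer": "Acme", "total": 500},
    {"customer": "Sybex", "total": 1200},
    {"customer": "Acme", "total": 75},
]

def file_share_query(customer: str):
    sent_over_network = list(ORDERS)               # entire database ships
    result = [r for r in sent_over_network if r["customer"] == customer]
    return result, len(sent_over_network)

def client_server_query(customer: str):
    result = [r for r in ORDERS if r["customer"] == customer]  # server filters
    return result, len(result)                     # only matching rows ship

rows, shipped = file_share_query("Acme")
print(shipped)   # 3 rows cross the network for only 2 matches
rows, shipped = client_server_query("Acme")
print(shipped)   # 2 rows cross the network
```

With three rows the waste is trivial; with three million rows it is the difference between a usable network and a saturated one.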
Because a large database server may be doing tasks for dozens—or even
hundreds—of users all at once, the hardware requirements on an application
server can be extreme. Moreover, because app servers do a lot of “thinking,”
faster processors or multiple processors can be crucial.
Internet Server
The last server type we will consider is one intended to deal specifically with
web-related or other Internet-related client requests. A number of Internet
services can be provided by network servers, but probably the most common
of these are the web server, the mail server, and the FTP server. Increasingly,
though, streaming media servers, online database servers, and Internet-
specific application servers are coming into use.
Web Server
The World Wide Web started out as a collection of HTML (HyperText
Markup Language) pages stored on Internet servers. Over the past few years,
though, web servers have gotten progressively more complex, and HTML
has evolved from a static file server technology to an active client-server
model.
The interaction between web servers and web clients (browsers) has
now become quite complex. Java, ActiveX, server-side scripting, and
database connectivity through the Web have all increased the power
and potential of web servers. Many enterprises now find that their web
servers are an integral part of their daily business environment through
both intranets and web-based applications.
FTP Server
FTP servers, on the other hand, remain very much the same today as
they were 10 years ago. An FTP server is just a file server for the Internet,
operating over the FTP protocol. Clients connect to the server, authenticate,
and add (PUT) or retrieve (GET) files just as you would on any file server.
The hardware requirements for web servers are fluid, as these servers can
support a few concurrent (simultaneous) connections or a few thousand. As
your expectation of the number of people using the site rises, so should your
hardware levels.
Mail Server
There are a number of different e-mail server options available for use with
your network. Most NOS vendors have e-mail packages available for
their server operating systems, and a number of freeware or shareware
e-mail servers are in use as well.
Besides providing for the critical ability to send and receive messages,
e-mail servers can filter out inappropriate messages, provide protection
from e-mail-borne viruses attempting to enter the system, and act as a
repository of information and communication data for the organization.
Because they do so much more than just shuffle mail around, these applica-
tions are often referred to as groupware rather than just as e-mail servers.
Summary
In this chapter, we have discussed what servers are and how to identify
server-class hardware and software. Knowing how to tell what hardware
components are appropriate, and which operating systems are designed for
server work, is critical when you are choosing a new server or deciding
whether an existing box is up to a new task.
We also looked at a sampling of the jobs that servers do, and examined
what types of hardware are needed for certain tasks. If you haven’t already,
spend a bit of time browsing the Internet links associated with the topics in
this chapter. There is a lot of good information there, and Web data hunting
is among the most important skills you will need to develop as a server
admin!
Throughout the rest of the book, you will take a Chapter Review Test.
Each test will consist of 20 questions designed to quiz you on the objectives
and content that you reviewed within the chapter. As stated, this chapter
does not cover any particular exam objectives—but, to keep your test-taking
skills sharp, we included 20 questions to reinforce some of the material
you’ve just reviewed. Much like the Assessment Test you took in the Intro-
duction, this test will help you target areas you may need to refresh before
forging ahead with the exam preparation. Good luck!
Exam Essentials
Know what a server is. Servers can be hardware or software that
provides a service for other devices connected to the network.
Know the characteristics of server operating systems. This includes
scalability, security, stability, and client prioritization.
Know the benefits of using server hardware. Expandability, depend-
ability, quality, and redundancy are the benefits of using server hardware
over server software.
Be familiar with common server roles. Servers can perform the
following roles within a network: security (account management,
authentication) and directory services.
Know the major routing protocols. RIP and OSPF are the main routing
protocols used today.
Know what a proxy server is. Proxy servers perform Internet tasks on
behalf of the computers on the network.
Know the main types of remote access servers. This includes dial-in
servers and VPN servers.
Be familiar with the different types of user services that a server can
perform. Servers can fulfill the following user services: file server, print
server, application server, Internet server, web server, FTP server, and
e-mail server.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. Which technology allows a number of servers to share resources and
create a single virtual server out of a number of machines?
A. Failover
B. Clustering
C. Scalability
D. Mirroring
C. VPN server
A. It is less expensive.
B. Multiprocessor support.
B. Backup
C. Uptime
D. Downtime
B. Plain text
C. Smart cards
A. Microsoft
B. CompTIA
C. ITU
D. OSI
A. RIP
B. DMZ
C. OSPF
D. SLIP
A. DNS resolution
B. Authentication
D. Firewall access
A. Application
B. File
C. Naming
D. Remote Access
B. Security
C. Ease of administration
D. Stability
B. Servers
A. Biometrics
B. Encryption
C. Write-protect tabs
D. Plain text
A. Distance vector
B. Link state
C. NetBIOS
D. IPX/SPX
A. Database server
B. E-mail server
C. Web server
D. TCP/IP server
19. C. A queue is a location where documents are kept in order until they
can be printed.
20. D. TCP/IP is not a type of application server. It is a network protocol.
The two subobjectives not covered in this chapter are “Verify hardware com-
patibility with operating system” and “Verify SCSI ID configuration and ter-
mination.” SCSI is covered in detail in Chapter 4, “Storage Devices,” so we
will leave the discussion of termination and SCSI IDs until then, and operating
systems and their individual quirks will be considered in Chapter 7, “Network
Operating Systems.”
server OS, and what database you purchase. Moreover, making a declarative
statement such as "back up the entire server each night" implies that other
options—such as differential or incremental backups—have been automatically
removed from consideration. Backups are discussed further in Chapter 14,
"Backup." A well-constructed goal should
be one that encourages ideas, not blocks them.
As that first goal from above seems as good an example as any for dis-
cussing planning, we will use this project as our example throughout this
chapter. This will allow you to see some of the issues that might come up
in an actual install.
True story: I was involved in upgrading an office of 150 people from ccMail
to Exchange/Outlook a couple of years back. We had trained the users on
how the new system would work, tested all the server and workstation
software for compatibility, and the entire rollout crew was confident that
the new system would go in without a hitch. People went home on Friday
night, we migrated them over the weekend, and when they came in Monday
morning, all their data had been transferred to Outlook and everything
worked great.
Unfortunately, that day was also the day that the “Melissa” e-mail virus
debuted, and it hit our location at about 10 A.M. The entire network was com-
pletely brought to its knees within minutes. It took days to get things back to
normal, and by then most users were begging for their ccMail back. They
didn’t necessarily know why the system had failed, and they didn’t really
care…they just remembered that this hadn’t ever happened with the previ-
ous system. One oversight, in other words, can make you and your work
look really bad, and it takes a long time to overcome a bad first impression.
Once you have talked to users about the system, you may want to go back
and modify your original project goal. Perhaps remote access to the server is
one of the critical areas driving the upgrade. In such a case, you may modify
the plan to read like this:
Purchase and install a database server for the company’s new SQL-based
accounting application, and upgrade the company web server to allow it
to host software that provides a web interface to the new database.
Part of the business case for upgrading the accounting server, in other
words, was to provide web access to data. This will be a key area that those
judging the success or failure of the project will examine, so you will want to
make sure that the hardware and software on the web server are up to the task.
in the last step. The administrators in the example we are using have come
up with the list in Table 2.1 (costs are as of Fall 2001, are not necessarily the
best prices available, and are used for example only).
All too often, training funds are allocated as part of the budget for a project,
but are then among the first things that managers cut out if the project is
running over budget. Fight to keep these dollars, because trained users are
happier, less confused users, and happier users make for happier administra-
tors. Remember that it won’t be the manager going desk-to-desk to explain
how the new software does this or that differently.
Once you have a good idea of what needs to be done, and how much it
should cost, you need to plan out this last cost element—as measured in
time, not money. The implementation of a large project like ours will require
a good deal of human effort. This includes administrators testing and
implementing the new configuration and help desk people dealing with addi-
tional support needs and any downtime users experience when the upgrade
process is underway. It also includes time spent in training sessions and, if
the project is complex, might include allocating funds for a consultant to
assist in crucial phases of the upgrade.
The term consultant covers a broad range of job descriptions. Some
consultants work for a particular company, and are highly specialized.
It is likely, for instance, that the software company that makes our new
SQL-based accounting package has employees who know the app well and
are expert resources on issues regarding that particular software. Other
consultants roam the landscape.
Either of these types of consultants can offer an excellent way of bringing
additional expertise onto your project. Hiring a consultant is expensive, of
course, but an experienced engineer might help you avoid problems, help
optimize the solution, and in the end save you time and money. For this
project, we will be bringing in a consultant from the software company to
review our plan and will be bringing that consultant back for the weekend of
the actual migration. To be safe, we have budgeted for four days, though
only three should be needed. As it is likely that the total bill for this assistance
will be $1,000–$1,500 per day, the consultant will be kept for the fourth day
only if absolutely needed.
The final estimate for the upgrade comes in at around $100,000 when
training, consulting, hardware, and software are all added up. This will
likely cause some serious questions to be asked about trimming costs. If
you need to reduce the cost of the project, try to cut a bit from everywhere
rather than just cutting out the consultant, the training program, or the web
server upgrade. If the company wants an $80,000 upgrade, find a way to
scale back evenly.
In the real world, there are times when the decision about what to cut is not
left up to the engineers. In these cases, you sometimes just have to deal with
the cuts where they occur, and do your best to minimize their impact. If
management makes unwise cuts and you don't deal with the potential
problems they create, you look bad and your users suffer. The manager
who caused this will probably be too busy golfing to even notice that there is
anything wrong.
project as a “job,” what that consultant is saying is, “I will do this project in
this many hours for this amount of money.” Because the work to be done
and the amount to be paid are fixed, the consultant must have a good idea
of how much time the project will take. If she estimates too high, her fee will
be too high, and someone else will get the bid. If she bids too low, she will get
the job but may spend a week doing a project bid out at a cost of two days.
That’s three days of free labor, which hurts.
Even if you are on salary, much the same process comes into play. If you
are too aggressive with the schedule, you might say, “We can do all that in
one day. No problem.” Of course you won’t actually be able to do that,
and you will end up backtracking, which can cause serious problems if other
people are planning off your schedule. Similarly, trying to be too safe can
make it seem like you are not interested in the project or not confident in
your ability to complete it.
Project Managers
All of this may seem confusing. You may even be thinking that you have no
interest in budgets or planning, and that isn’t what you got into computers
to do. If you are lucky, there is someone wandering around your office with
a title like “Project Manager” who deals with scheduling, budgeting, and all
the rest. If so, let him deal with all of this. Believe it or not, he might have
chosen to make that his life’s work. He will probably ask you a few ques-
tions, more than likely will require a list of project requirements as shown
above, and will later return with a spreadsheet that details the entire project
plan. Oh, and this probably goes without saying, but be nice to him—your
fate is in his hands!
If you don’t have this option, though, the following resources might come
in handy when planning the big upgrade:
From Serf to Surfer: Becoming a Network Consultant, by Matthew
Strebe, Marc Bragg, and Steven Klovanish (Sybex, 2000). This book
deals with issues related to working as a consultant, such as planning,
pricing, and all the other stuff we have been talking about.
Mastering Microsoft Project 2000, by Gini Courter and Annette
Marquis (Sybex, 2000). Project 2000 is reputed to be great software
for project management, and this book deals with how to use the
software and the management logic behind the process.
Playing Politics
As you spend more time working as an engineer, you will learn (or you may
have learned already) that the most complex aspects of computer systems are
the people who use them.
Because of this, one step you will need to take is to ensure that your
project plan is acceptable to all of the various groups and factions within
the organization. Without a doubt, the key element to surviving this part
of the project is this:
Get a sponsor!
If the project is going to get done, you are going to need someone who
has the power to get things done. Try to identify a department or group of
users who will specifically gain from the project, and co-opt them into the
process. Going back to our example project, it seems that the accounting
department manager, or maybe even the chief financial officer, might be a
good place to look.
The reason you want to specifically target them, though, is that this
engages them personally in the success or failure of your project. If problems
come up with the budget, time conflicts, or people getting cold feet about
moving to the new system, having these people with you can make all the dif-
ference. They will be able to manipulate the system in ways we cannot even
imagine, find money in places we would never think to look, and enforce
acceptance on the troops.
Moreover, having a sponsor lends authority to the project, and often just
the sponsor’s name on a proposal will quell dissent and speed authorization.
Crazy, but true.
Even if you do have someone to help calm the rough waters, it is also
important to continually keep people up to date on what you are planning,
and how it will affect them. Find out what concerns people have about
the project, and try to implement steps into your rollout that protect against
problems that will have a significant impact on user acceptance of the
project.
of them has already tried what you are planning to try, and can give
you information on how it works.
Newsgroups
You will see all of these again, especially when we look at the objective
dealing with troubleshooting. For now, though, just remember that you
should check for compatibility when you plan, double-check before you
actually buy, and watch for problems when you implement.
Most software companies will allow you to test their software for compatibil-
ity using a trial version that has limited functionality or timed deactivation.
Certain hardware vendors also let you test out their devices, but this is gen-
erally only an option if you are considering buying a large quantity. Still, it
never hurts to ask.
Once you are certain that all the pieces of your project will fit together, it
is time to order your hardware and software, and start implementing your
project plan.
switch, and monitor for the server. You may also have to budget for
network cabling, routers, and so on if this is a new server location.
Once you have found space for your server, and identified UPS, KVM, and net-
work hookups, it is important to protect them from being stolen by someone
else. Months often pass between planning and implementation, and server
room space is always at a premium. Find some way of identifying the outlets,
network ports, and KVM ports that you will be using!
When setting up the server, make sure you have some room. These are not
small machines, and they generally come in extremely large, well-packed
boxes. Having someone else to help lift is also a good idea.
finish adding components, move the server into its new home in the server
room, and connect all necessary cabling.
Most servers come pre-assembled, in that their motherboards, power
supplies, and other standard components are all in place when you get them.
Most also have built-in or included network and video cards. Even so, it
is important to go through the server and check that cables are tightened
properly, cards are properly seated, and everything is in order.
It is interesting that CompTIA has chosen to mention these elements
under its first objective, “Conduct pre-installation planning activities,”
because they are referenced again in their own topics later. As such, this
book will follow a similar tack. Chapters 3 through 6 will talk extensively
about the different types of hardware available to you, and how to determine
what is the best option for your needs. In this chapter, we will concentrate
instead on just putting things together. That process is actually rather
straightforward, and includes the following steps:
Add additional RAM, processors, or other resources to the server
as needed.
Insert hard disks, extra CD drives, or backup drives.
Mount the server into its rack.
Cut and crimp cabling to the server.
Connect the server to a UPS.
Verify SCSI settings.
Install external devices.
As you are working to put the machine together, be certain to take a
few general precautions. First off, make certain that you do not have power
active to the server when you are installing devices. Obviously you don’t
want to try and do this while the server is on, but even just having active
current going to the machine can pose a danger to the server—and to you.
Also, using an ESD static strap to discharge static electricity is an excellent
idea. ESD, or electrostatic discharge, is the leading reason for component
failure during installation. A static strap is a wrist strap that connects between
your wrist and a ground. The strap contains a resistor that will slowly drain
any static charges out of your body and away from the computer. ESD and
ESD static wrist straps are covered in detail in Chapter 15, “Disaster
Recovery.” The proper use of a static strap is shown in Figure 2.1.
This is the sort of thing that can cause problems for experienced server
administrators. Many administrators have developed their own installation
habits over time, and may no longer use a static strap when installing. Just
remember that for the test you should do things by the book.
Make certain that the RAM and processor are seated properly, and
that the fan and heat sink for each processor are properly attached. Also
ensure that the fan is plugged in.
As processors have gotten faster, the amount of heat they generate has
increased. Because of that, the processor fan and heat sink are critical to the
health of your system. Make sure that the processor fan is working by briefly
powering up the computer after installing the fan. You don’t need disk drives
or any of that—just make sure the blades are spinning properly, and power
back down.
Although it is possible to install drives into internal bays, many servers use
external bays into which drives are inserted. In such cases, the drives are
placed into a protective shell called a tray. This tray then fits into the bay on
the server.
In order to ensure that drives are working, you should check the inter-
nal cabling leading to the drive bay, verify that the drive trays you have
are correct for the server, and make sure that the drives themselves are
properly secured within their trays. You will also need to carefully insert
each of the trays into its bay according to vendor instructions, as in
Figure 2.3.
Remember to purchase the trays! Each drive purchased will need its own
drive tray, and these do not come with the server. It’s amazing how many
installations are held up waiting for trays that no one thought about until
it was too late.
Rack Form Factor A rack form server (see Figure 2.5) and a tower
server can be identical in their components or their capabilities. What is
different about them, though, is that they are specifically designed to be
placed inside space-optimizing storage cases. These server racks allow
servers and other network components to be stacked on top of each other,
making it easy to store them and also facilitating keyboard and monitor
sharing and other conveniences.
For a look at different types of servers and a chance to go under the hood and
look around, check out Dell’s interactive 3-D demos of selected servers. Go to
www.dell.com/us/en/biz/products/model_pedge_pedge_6450.htm to
see the PowerEdge 6450, but other models can be checked out as well. Just hit
the View 3D Demo button and wait for it to install the applet.
Many companies are now offering rack-optimized servers that are smaller and
easier to handle. They are also, of course, more expensive and more difficult
to expand, and they generally lack the drive and processor capacity of larger
rack servers or towers.
Some servers fit nicely into a standard rack, while other companies—
Compaq and Sun come quickly to mind—build their servers to their own
specs, and you usually need to buy a rack from them that is specifically
designed for their servers. The problem is generally not the width of the case,
as all cases have standardized on a 19-inch width. Rather, the issue is that
these servers are so heavy and deep that they will topple any case that does
not have proper support.
KVM Switch
Generally just called a KVM (keyboard/video/mouse), this switch is a
component that allows multiple computers to be controlled using a single
set of input-output devices. One workspace can be set up near the rack, and
the KVM allows users to switch back and forth between servers.
One limitation of a KVM is that only one machine at a time can be managed.
To manage servers simultaneously, separate inputs and outputs must be
maintained.
UPS
Every server should have a UPS (uninterruptible power supply) serving it.
This device ensures that if the server room loses power, the server will have
time to properly shut down, rather than just shutting off. Chapter 13 deals
more with how to plan for UPS and KVM coverage in your environment. For
now, just make sure that a connection to the UPS is available, and that it is
plugged in.
Backup Drive
Backup will be covered in more detail in Chapter 14, but let’s take an
overview approach here to get your feet wet, so to speak. As you are setting
up the new server you should make sure that all of the drive space you are
adding to the network can be backed up by your existing backup strategy—
you will do this with a backup drive. A backup drive is nothing more than
a device that creates a copy of your data on a removable device such as a
magnetic tape or an optical disk (CD or DVD). If your current backup strat-
egy is not adequate, you may need to purchase another backup drive and add
it to the new server.
Backup drives generally will need a shelf in one of the server racks, and the
drive should be easily accessible, making the changing of tapes as convenient
as possible.
Backup drives usually run off a SCSI controller, although IDE/EIDE and
even floppy controller–based backup drives are available. Because of this,
the installation of the backup drive dovetails nicely with our next item:
checking the SCSI settings.
Termination
SCSI termination must occur at the beginning and end of the SCSI chain.
Remember that SCSI devices can be external or internal. Proper termination
also includes using the appropriate terminator for the SCSI type that you
are using.
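As a rough illustration of the termination rule, a SCSI chain can be modeled as an ordered list of devices and checked for unique IDs and end-only termination (a toy model for reasoning about the rule, not a real configuration tool):

```python
def validate_scsi_chain(devices):
    """Check a SCSI chain: unique IDs, and termination only at the two physical ends.

    `devices` is an ordered list from one end of the chain to the other;
    each entry is (scsi_id, terminated).
    """
    ids = [scsi_id for scsi_id, _ in devices]
    if len(ids) != len(set(ids)):
        return False  # duplicate SCSI IDs cause bus conflicts
    for index, (_, terminated) in enumerate(devices):
        at_end = index in (0, len(devices) - 1)
        if terminated != at_end:
            return False  # ends must be terminated; middle devices must not be
    return True

# Controller at ID 7 on one end, tape drive at ID 6 on the other, disk in between
chain = [(7, True), (0, False), (6, True)]
# validate_scsi_chain(chain) returns True; terminate the middle disk and it fails
```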
5. Install software.
I want to discuss the steps I take when installing a new server. These are
not the steps advocated by the exam objectives; however, in my day-to-day,
real-life application, they work. So I advise following the previous steps listed
for purposes of taking the exam, but consider the steps I'm listing here
a real-world reference.
4. Install software.
If your staging area has a network connection to the server room, it may
even be best to finish the entire configuration—operating system, applica-
tions, everything—on the workbench. There is generally no real advantage
to hurrying the machine into the server room, and the disadvantages include
sitting in the cold and having to enter the key code for the server room 20+
times a day while testing the system.
Regardless of when you decide to do the power-on test for your server, it
is the first real test of your skills. Hopefully all the hardware is compatible
and has been configured properly. The server should come up with BIOS
readings about the RAM and drive configuration, and it is likely that on first
boot you will be required to enter the BIOS configuration utility, so that
boards can be configured and hardware levels recorded.
Summary
In this chapter, you learned about how to plan a project, and about
the different types of server roles. You also were given a quick overview
of the technologies and tasks that will be considered in the next few chapters.
If you have time, browsing some of the Internet sites listed in this chapter
can be invaluable in helping you to learn about the state of the industry.
They will also give you some idea of what is similar, and what is different,
among the companies producing servers for PC networks.
Exam Essentials
Know how to plan an install. Know how to define a project goal,
examine current configurations, budget a project, and set a timeline
for completion.
Know how to set a time frame for completion of a project. This
includes prioritizing, deciding which tasks are independent, setting
incremental delivery times, and setting time aside for dealing with
problems.
Know the three elements in verifying an installation plan. This includes
people, hardware, and software.
Know what the HCL is. Microsoft uses a Hardware Compatibility List
to ensure that hardware will run as expected with their operating systems.
Know what a UPS is. A UPS or uninterruptible power supply allows
you to run a server for a short period of time in the event of an AC power
failure.
Know what KVM switches are and how they are used. A KVM (or
keyboard/video/mouse) allows you to use one monitor, keyboard, and
mouse to control several computers.
Know the term ESD. Be able to explain what ESD is and how to prevent it.
Know the server form factors. Know the common server form factors
of towers and rack mount servers.
Key Terms
Before you take the exam, be certain you are familiar with the
following terms:
Review Questions
1. Which of the following is not a step in planning an installation?
A. Detailed
B. Focused
C. Perform a backup.
C. Resources
D. Downtime caused by the upgrade
B. Creating a list
C. Training
D. Informing users of the upgrade
B. Software
C. Delivery methods
B. Hardware
C. Software
D. Installation Instructions
A. Keyboard/video/mouse
B. Keyboard/video/monitor
C. Keylock/video/monitor
D. Keyboard/video/machine
B. Electrostatic discharge
A. A model of a computer
C. A class of server
D. A component within a server
D. It is more affordable.
B. Incorrect SCSI ID
C. Incorrect termination
Please see Chapters 2, 7, 9, 10, 11, and 12 for additional coverage of the “Add
Processors” objective.
The Motherboard
The motherboard is the backbone of a computer, providing connectivity
between all the components of the computer. All computer components plug
into the motherboard in one way or another. With increased demand for
computer power, designers have had to adapt motherboards accordingly.
New processors, bus speeds, RAM types, data transfer speeds, and com-
ponents have together pushed the evolution of the motherboard forward
at a steady pace.
Form Factors
Another motherboard classification is the form factor. Essentially, form
factors define the layout of components on the actual motherboard. There
are three broad categories of form factors: AT/Baby AT, ATX, and NLX.
The AT was the original IBM form factor design, on which the processor,
memory, and expansion slots were all arranged in a straight line. This posed
a problem for full-length expansion cards because the height of the processor
interfered with proper card installation. In addition, heat dissipation from the
processor sometimes caused problems for the expansion card. The Baby AT
was a smaller version of the AT with newer, smaller components. It was a more
compact board, but had the same drawbacks as the AT. In a home PC this is
rarely an issue, but in the server world many expansion cards are full-length.
Traditionally, servers are not designed around the Baby AT form factor.
The ATX component layout is different from the AT. In the ATX form
factor, the processor and memory are arranged at a right angle to the expan-
sion slots, allowing room for the use of full-length expansion cards. In
the newer computers, the combined height of the processor, heat sink, and
cooling fan make it impossible to insert full-length cards in any other form
factor, and most new computers (including servers) are built around the
ATX form factor. New ATX motherboards also offer advanced power
management features that make them even more attractive to computer
builders. For example, ATX motherboards offer a soft shutdown option,
allowing the operating system to completely power down the computer
without the user’s having to press the power switch.
NLX has been a form factor in use with desktops for quite some time. It is
a compact form factor, often referred to as a “low-profile application.” NLX
motherboards are easily distinguished by the riser card to which the expansion
cards connect. The riser card allows from two to four expansion cards to be
plugged in. These expansion cards sit parallel to the motherboard.
Servers with this form factor offer power similar to the larger traditional
servers, but in the size of a VCR. The obvious benefit of NLX is that the bulk
of a traditional server is reduced to a space-saving smaller server. Addition-
ally, servers assembled in a rack mount case can be secured to a rack, which
can itself be secured to the floor, providing better equipment safety.
Beyond these three principal categories of form factors, some companies
have created their own motherboard layout. For the manufacturer, this
proprietary design allows for specific and custom creation of servers. For
the end user or technician, however, it can be a nightmare, often requiring
special training by the manufacturer before the custom equipment can be
serviced. There is also the possible difficulty of locating the specialty parts.
Components of a Motherboard
Regardless of the form factor, motherboards all contain similar essential
components, including processor slots, expansion busses, RAM banks, inte-
grated controllers (either IDE or SCSI), power connectors, and peripheral
connectors. It is these essential components that work together to provide
the connectivity and communication within the computer. The diagram in
Figure 3.1 is a structural overview of a typical server motherboard.
Expansion Busses
Expansion busses provide a means of adding additional components to
your computer, such as a video card, network card, SCSI controller, RAID
controller, or others. Integrated motherboards have less need for numerous
expansion busses than non-integrated boards. In the history of computers,
eight major expansion busses have been developed, but only three of these
busses are commonly used in modern servers: AGP, PCI, and ISA.
AGP
The Accelerated Graphics Port (AGP) bus is for advanced video. Only
one expansion card, the video card, is made for an AGP port or interface.
The AGP port is easily identified by its brown color and offset alignment
(as compared with the other expansion bus slots). Motherboards contain
only one AGP port.
The first AGP release used a 32-bit data bus running at a 66MHz base clock
(a measure of the speed of information flow). Newer releases include 4X AGP,
which transfers data at an effective 266MHz, four times the base rate! Although
numerous servers on the market include AGP video, either as an expansion port or
an on-board video card, it is unlikely that a dedicated server would need the
advantage of advanced video—how often do you play graphics-intensive
games on a server? Some “gaming servers” provide connectivity for other
gaming computers, but the server is rarely used as an actual gaming machine.
The risk of corruption or configuration problems outweighs the benefits
of the AGP bus.
PCI
First released at the inception of Pentium-generation processors, Peripheral
Component Interconnect (PCI) cards are the major expansion card type
in use today. PCI is popular due to its transfer speeds (32- or 64-bit busses)
and ease of installation. PCI also supports bus mastering (a means of
allowing a device such as a hard disk to communicate directly with another
device without the input of the CPU) and speeds up to 66MHz. Installation
and configuration are dramatically easier than for earlier busses, with
resources for the card being determined by either the operating system or
the system BIOS through Plug and Play.
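For a sense of why PCI's width and clock matter, the peak theoretical throughput is just bus width in bytes times clock rate (real-world rates are lower due to protocol overhead):

```python
def peak_bandwidth_mb(bus_width_bits, clock_mhz):
    """Peak theoretical throughput in MB/s: bytes per transfer times clock rate."""
    return (bus_width_bits // 8) * clock_mhz

pci_standard = peak_bandwidth_mb(32, 33)  # 132 MB/s (usually quoted as 133)
pci_server = peak_bandwidth_mb(64, 66)    # 528 MB/s (usually quoted as 533)
```

The quoted 133/533 MB/s figures come from the true 33.33/66.66MHz clocks; the whole-number inputs here round slightly low.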
ISA
Industry Standard Architecture (ISA) is the oldest of the three types of
expansion bus. This bus preceded PCI and was extremely popular in its
time. Today it has been nearly phased out, but some motherboards still
have one or two ISA slots for use with older expansion cards. Most new
motherboards, however, have no ISA slots.
This bus is 16-bit and allows transfer speeds of 8MHz, with some models
running in turbo mode at 10MHz. ISA, at times, is difficult to configure
because it requires jumpers and/or DIP switches to be manually set. The
technician must be aware of which resources are in use and which are
available. Should the technician configure the new expansion card with
the IRQ or DMA of another device, a conflict occurs. In days gone by, it
was not uncommon to have the mouse freeze on your 486 computer when
you tried to dial up your modem because the mouse and modem were
often misconfigured with the same resources. The ramifications of this
particular conflict were minimal, but more serious situations can occur—
for example, conflict with a hard disk controller.
Table 3.1 lists the IRQ (interrupt request lines) usage for the ISA bus.
Table 3.2 lists the DMA (Direct Memory Access) assignments.
TABLE 3.1 Default IRQ Assignments for the ISA Bus

IRQ   Assignment
0     System timer
1     Keyboard
2     Cascade to IRQ 9
3     COM 2 and 4
4     COM 1 and 3
5     LPT2
6     Floppy controller
7     LPT1
8     Real-time clock
9     Cascade to IRQ 2
10    Available
11    Available
12    Bus mouse
13    Math coprocessor
15    Available
TABLE 3.2 Default DMA Assignments for the ISA Bus

DMA   Assignment
0     Available
1     Available
2     Floppy controller
3     Available
5     Available
6     Available
7     Available
As shown in Tables 3.1 and 3.2, available resources are limited. Normally
IRQ 5 is available because it is rare to have and use both LPT ports. It is
also rare to use a bus mouse today; a PS/2 or USB mouse is the norm,
leaving IRQ 12 as a free resource. This leaves five IRQs available for
expansion cards. Typically, IRQ 5 is used for a sound card and IRQ 10 for a
network card. This is not a computer law, but rather an unwritten rule that
technicians generally follow. The sound card can be configured with any
available IRQ, but many programs take for granted that the sound card is on
IRQ 5. That leaves three open IRQs for other devices such as a network card.
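The manual bookkeeping described above can be sketched as a simple lookup: an in-use table like Table 3.1, plus a check that refuses a double assignment (illustrative only; real ISA configuration happens in jumpers, DIP switches, or the BIOS):

```python
# Default ISA IRQ usage, simplified from Table 3.1
IN_USE = {0: "system timer", 1: "keyboard", 2: "cascade", 3: "COM 2/4",
          4: "COM 1/3", 6: "floppy", 7: "LPT1", 8: "real-time clock",
          9: "cascade", 13: "math coprocessor"}

def assign_irq(in_use, device, requested_irq):
    """Record an IRQ if free; a conflict here is what froze 486-era mice."""
    if requested_irq in in_use:
        raise ValueError(f"IRQ {requested_irq} conflict with {in_use[requested_irq]}")
    in_use[requested_irq] = device
    return requested_irq

assign_irq(IN_USE, "sound card", 5)     # the conventional choice
assign_irq(IN_USE, "network card", 10)
# assign_irq(IN_USE, "modem", 4) would raise: COM 1/3 already owns IRQ 4
```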
Most computers in use today are not the latest, greatest technology.
Many motherboards contain several different expansion busses. Brand-new
motherboards will contain an AGP slot and several PCI slots. Motherboards
that are a couple of years or more old will contain AGP, PCI, and most
likely two or three ISA slots. The question is, what expansion cards do you
purchase for which bus? Obviously, if you have an AGP bus, then you should
try to purchase an AGP video card. This not only takes advantage of the
capabilities of the AGP, but also allows another expansion card to use the
PCI slot that the video card may have taken.
What do you do, though, if you have to choose devices to fit in PCI and ISA
busses? What if you need to install a network card and a sound card but
have just one PCI and one ISA bus left? How do you decide which expansion
card to purchase for which bus? The rule is to select the card that will be
under the most stress for the fastest bus. This will ensure that the faster
transfer rate of the bus will be put to good use. Odds are, if your computer
is to be networked, then the network card would be under more stress, so
it should be installed in the PCI bus. The sound card, although available in
a PCI format, would be the better choice for the ISA bus.
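The rule of thumb, fastest bus to the busiest card, can be sketched as a simple pairing (the speeds and stress rankings here are illustrative assumptions):

```python
# Rough bus speeds in MHz; cards listed most stressed first
slots = {"PCI": 66, "ISA": 8}
cards_by_stress = ["network card", "sound card"]

def match_cards_to_slots(cards, slots):
    """Pair cards (busiest first) with slots (fastest first)."""
    fastest_first = sorted(slots, key=slots.get, reverse=True)
    return dict(zip(cards, fastest_first))

placement = match_cards_to_slots(cards_by_stress, slots)
# placement pairs the network card with PCI and the sound card with ISA
```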
Memory
Inside the server are several different forms of memory. Each plays a
significant role in typical operation of the server. Memory in any form is
the means of storing information, either temporarily, semi-permanently,
or permanently.
RAM
Random access memory (RAM) is the most common memory within a
computer. RAM is physical memory, a collection of chips on a small
circuit board that attaches to the motherboard via a slot called a bank.
Most motherboards contain several banks for RAM installation. RAM
is volatile memory. When power to the computer is lost, the information
stored in RAM will be lost.
RAM has seen constant evolution over time. Early forms of RAM were
known as static RAM or SRAM. SRAM didn't need constant refreshing
from the computer; information was stored in a series of transistor
flip-flops, which made SRAM fast but expensive and difficult to build in
high densities. Newer RAM is called dynamic RAM (DRAM). DRAM
requires constant refreshing, and information is stored as electrical charges
within small capacitors. DRAM, due to its simple cell design, allows for
high-density packaging. This in turn creates RAM with larger capacities
in much smaller chips. Examples of DRAM are EDO, SDRAM, DDR SDRAM,
and RAMBUS.
EDO RAM
Extended data output RAM (EDO RAM) emerged in 1995. It provided a
performance increase of 10 to 15 percent over traditional memory. The
major downside to EDO RAM was that it had to be installed in pairs. If
you wanted 32MB of RAM, you had to install two 16MB modules. This
limited the number of available banks for RAM installation on a mother-
board, as well as options for RAM expansion. Six available banks on a
motherboard really meant three. Many motherboards of this era also had
specific sequencing for installing EDO RAM. For example, a computer
with 16MB installed could add another 16MB (two 8MB modules) or
32MB (two 16MB modules). EDO RAM is not in use today as a result
of these limiting factors.
SDRAM
This is the most common RAM in use today. Synchronous dynamic
RAM (SDRAM) runs at system bus speeds that translate into 66MHz,
100MHz, and 133MHz. These improved speeds over previous types of
RAM eliminated wait states between the system and RAM, which was an
issue in the past. A wait state is the time the processor spends waiting for
RAM to deliver requested data.
DDR SDRAM
Double data rate synchronous dynamic RAM (DDR SDRAM) is an
enhancement of SDRAM. DDR SDRAM doubles the effective clock speed by
performing reads and writes on both edges of the clock cycle (as opposed to
only one edge). This translates into twice the memory transactions per cycle,
and therefore increased system performance. A system with a 100MHz
memory bus will perform at an effective 200MHz.
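The doubling is straightforward to express: effective rate equals bus clock times transfers per cycle (a sketch of the arithmetic, not a memory model):

```python
def effective_rate_mhz(bus_clock_mhz, transfers_per_cycle):
    """Effective data rate: DDR moves data on both edges of each clock cycle."""
    return bus_clock_mhz * transfers_per_cycle

sdram_rate = effective_rate_mhz(100, 1)  # plain SDRAM: one transfer per cycle
ddr_rate = effective_rate_mhz(100, 2)    # DDR: both edges, an effective 200MHz
```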
RAMBUS RAM
Direct Rambus RAM is the newest RAM available on the computer market.
It is extremely fast, with speeds up to 800MHz, and operates like DDR
SDRAM, working on both sides of the clock cycle. Rambus RAM is often used
in advanced, resource-intensive gaming systems, and is increasingly being
used in desktop computers.
ROM
Read-only memory (ROM), another important memory component, is used
to store permanent information for easy and quick retrieval. ROM chips,
much like RAM, have seen broad evolution, beginning at PROM and
moving to EPROM and EEPROM.
An EPROM chip
Cache Memory
Cache memory is located on all motherboards and operates much faster than
RAM. Cache memory stores information that is requested frequently, allowing
for faster access and response. L1 (level 1) cache memory is actually located
within the processor chip itself. L2 (level 2) cache memory is located on the
motherboard. Both L1 and L2 cache are designed to be used by the processor.
Cache memory is often found on other components as well, including
RAID cards and network cards. This cache memory provides functions to
these devices just as cache memory does for the processor: rapid storage of
and access to information.
Processor Slots
With the constant evolution of computer processors has come a change in
the way the processor connects to the motherboard. Selecting the right
processor to match a motherboard, or vice versa, is often a confusing and
difficult task. To make that task a bit easier, we have included a listing
that describes the common processor connection interfaces in detail.
Socket 1 Used with the 486 chip, often called a PGA (pin grid array) or
ZIF (zero insertion force socket). It has 169 pins and operates at 5 volts.
Socket 2 An upgrade of socket 1. Has 238 pins and runs at 5 volts.
Supports 486 chips as well as the Pentium OverDrive processor.
Socket 3 Contains 237 pins and operates at 5 or 3.3 volts, controlled by
a jumper on the motherboard. Supports all socket 2 processors as well as
the 5×86 chips.
Socket 4 With 273 pins, this socket was designed for Pentium-class
machines running at 5 volts. Beginning with the Pentium 75MHz, how-
ever, Intel dropped the voltage to 3.3 volts, so this socket had limited use.
Socket 5 Operates at 3.3 volts and has 320 pins. Supports Pentium chips
from 75MHz to 133MHz; socket 5 was replaced by socket 7.
Socket 6 Designed for the 486 at a time when the industry was moving
into the Pentium class; never really came into mainstream use.
Socket 7 The most widely used socket; contains 321 pins and operates
between 2.5 and 3.3 volts. Supports all Pentium-class chips from 75MHz
and up, and MMX chips. Also supports chips from AMD and Cyrix.
Incorporates a voltage regulator.
Socket 8 Designed primarily for the Pentium Pro chip. It has 387 pins
and operates between 3.1 and 3.3 volts.
Socket 370 A socket 7 with an additional row of pins on all four sides.
Used by Celeron processors as well as Celeron II and some Pentium III
processors (FC-PGA models).
Slot 1 A radical change in design, this was Intel’s “processor in a box.”
This boxed processor interfaces with the motherboard through what
appears to be an expansion bus, called a Slot 1. This design eliminated
the risk of bending processor pins, an all-too-common problem with
other socket interfaces. Inside the box was the same processor chip, but
preinstalled on a separate daughter card along with the L2 cache. This
was then shrouded in a heat sink and fan assembly box. Having the
processor in this format allowed for better air flow and cooling, but it
was bulky—standing on edge, it created spatial issues within the case.
Slot 2 Similar to Slot 1, but with a larger 330-contact connector slot.
This connector allowed the CPU to communicate with the motherboard
at full CPU clock speed. Slot 2 was designed for the newer Pentium chip
sets, including the Xeon processor.
Slot A Similar to Slot 1, Slot A uses a different protocol (EV6) and is
custom designed for the AMD Athlon processor. Using this protocol, the
processor-to-RAM communication can achieve speeds of 200MHz.
Socket A Using 462 pins, this socket is designed solely for the AMD chip
sets, including the Athlon and Duron processors.
Slocket A Slot 1-to-socket 370 adapter that allows a chip designed for a
socket 370 application to be used in a Slot 1 motherboard. Slockets are
not well received by many technicians. The modification between the
two interfaces is done with jumper settings and digital circuitry. Slocket
configuration is often compared to ISA expansion card configuration—
there can be serious consequences if it’s misconfigured.
Power Connectors
Electrical power interfaces with a motherboard in several ways. Main power
attaches to the motherboard through a single plug on an ATX board, or
through two smaller plugs on an AT/Baby AT. If your motherboard is of the
AT form factor, these two small plugs must be oriented correctly, with
the black wires of each plug meeting in the center.
With the main power attached to the motherboard, the control of system
power is now given to the motherboard. Cooling fans, processor fan,
and startup/shutdown can all be electrically controlled and monitored by
the motherboard’s use of the electricity. For example, ATX motherboards
offer the soft shutdown feature. When you select Shut Down in the Windows
operating system, the computer will actually power down. This can be
combined with a boot initiated through a mouse movement, keyboard
hot key, or even a request through the network (called Wake on LAN).
Most motherboards also have power connectors (2-pin or 3-pin) for cooling
fans. The fans plug directly into the motherboard where the RPMs and
airflow are controlled through the motherboard. This allows the mother-
board to maintain a consistent temperature within the computer case.
In a server environment it is very common to see more than one power
supply. This redundant power supply acts as a backup to the first one.
More time will be spent on redundant components in Chapter 5, “Fault
Tolerance and Redundancy.”
Keyboard/Mouse
Every motherboard needs a means of interfacing with the user. This is often
done through the keyboard and mouse. Once the server is up and running,
many administrators will remove the keyboard and mouse for safety and
security of the server. Access to the server is then done remotely through the
network. However, for operating system installation, a keyboard and mouse
are needed. Depending on your motherboard, a legacy-free board may
have only a few USB ports into which a mouse and keyboard would
plug. The problem is that some operating systems (Windows NT for
example) do not support USB ports. Typically, a server will have PS/2 ports
for a mouse and a keyboard. Older servers may still require a serial mouse
(which would use a DB-9 connector) and a DIN 5 keyboard connector.
Firmware
Firmware is defined as any software that is stored in read-only memory
(ROM, EPROM, EEPROM) and that maintains its contents when power is
removed. Inside the server are several components that will contain ROM
chips and therefore firmware. This list includes the previously discussed
CMOS chip, but also SCSI controllers and RAID controllers.
Firmware is commonly upgradable. This process normally requires
downloading the file from the manufacturer’s website. Although tempting, it
is extremely dangerous to download such files from third-party websites.
The validity and integrity of the file may be compromised. A failed firmware
update can leave you in a difficult situation. Once the correct file is down-
loaded (this will require careful matching of your hardware to the correct
download file), it can be run. The firmware update is extracted onto a floppy
disk. When the disk creation is complete, you simply reboot the computer
with the disk in and the update will occur automatically. In essence, the
firmware disk is a boot disk that runs a specific program that reprograms
the chip when the computer is booted.
Firmware updates are performed to provide new updated features and
support for the latest hardware or to repair problems with hardware. For
example, Asus released a firmware update for their motherboards that
repairs an issue with the soft shutdown feature. This firmware update
relates to the CMOS chip. Another example would be a firmware
update for a SCSI controller, which would provide advanced support for
new hard disk technology. No one can forget the Y2K issue—the fear that
computers would not be able to calculate the year 2000. Many systems
were repaired through a simple firmware update.
In a typical server there can be several components with firmware. It is
good practice to document each component, firmware revision number,
date, and manufacturer. Many manufacturers maintain a mailing list and
will notify you via e-mail when a new release is available. At that point,
you decide whether to update or not. The benefits of the update must be
useful to your specific application. Keep in mind that there are risks with
every update. The possibility of the device not working properly because
of conflicts with other devices, other software, or the operating system
are realistic consequences. Always perform a backup of all data as well
as a full compatibility check before performing any update to a server.
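The documentation practice described above can be as simple as a small table kept in a script. A minimal sketch follows; every component name, revision, date, and vendor in it is a hypothetical example, not drawn from any real server.

```python
# A minimal sketch of a firmware inventory, as suggested above.
# All component names, revisions, dates, and vendors are hypothetical examples.
firmware_inventory = [
    {"component": "System BIOS",     "revision": "1.04", "date": "2001-11-02", "vendor": "Award"},
    {"component": "SCSI controller", "revision": "3.10", "date": "2001-06-15", "vendor": "Adaptec"},
    {"component": "RAID controller", "revision": "2.77", "date": "2001-09-30", "vendor": "Mylex"},
]

def report(inventory):
    """Return one summary line per component: name, revision, date, vendor."""
    return ["{component}: rev {revision} ({date}, {vendor})".format(**item)
            for item in inventory]

for line in report(firmware_inventory):
    print(line)
```

Keeping the list in one place makes it easy to compare your recorded revision against a manufacturer's release announcement before deciding whether an update is worthwhile.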
Processors
Processors have seen the most rapid change over the last couple of
decades. In the server environment, processors can face incredible stresses.
Selecting the right processor to meet your needs is therefore critical. Before
looking at the current processor types, you need an understanding of the
important features.
Clock speed is the main element on which most people focus when they
talk about processors. Clock speed is measured in millions of cycles per sec-
ond—megahertz (MHz). Instructions are carried out based on clock speed,
analogous to a musician playing to a metronome. Clock speed is not the only
factor in processor performance but it is a major factor. The faster the clock
speed, the faster instructions can be carried out. Latest releases of processors
have exceeded the megahertz classification and moved into the gigahertz
range. Currently processor speeds have reached 2GHz.
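To make these numbers concrete, each clock cycle lasts 1/frequency seconds, so a faster clock means less time per cycle. The following sketch (an editorial illustration, not part of the exam material) converts clock speed to cycle time:

```python
# Clock speed determines how long one cycle lasts: cycle time = 1 / frequency.
def cycle_time_ns(clock_hz):
    """Duration of one clock cycle in nanoseconds."""
    return 1e9 / clock_hz

print(cycle_time_ns(100e6))  # a 100MHz processor: 10ns per cycle
print(cycle_time_ns(2e9))    # a 2GHz processor: 0.5ns per cycle
```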
L1 cache, as previously mentioned, provides fast access for the processor.
Therefore processors with larger quantities of L1 cache will perform better.
A larger cache also comes at a proportionally higher price. Server
processors will commonly have more L1 cache than desktop processors.
Voltage is another consideration. Lower voltages in processors will
generate less heat, and lower heat will allow for smoother and more stable
operation. Old processors ran at 5 volts, and this generated a steady amount
of heat; however, due to the slower clock speeds, the small heat sinks were
able to handle heat dissipation. With newer processors, the heat generated
from the faster clock speeds combined with the 5 volts of direct current
electricity made the processor unstable. Most of today’s processors run
at 3.3 volts to combat this problem.
Intel Processors
It is common knowledge that Intel has had a strong hold on the computer
processor market for a long time. Today there are several other manufac-
turers of quality computer chips on the market, but Intel still has a dominant
hold on the server market. Table 3.3 defines the major Intel processor
classes, speed ranges, and specifications as seen today.
AMD Processors
AMD processors, as related to servers, are rather new on the market. AMD
(Advanced Micro Devices) only recently introduced their server line chip,
the MP, to compete with the Intel server line. The MP chip is available in
a 1.2GHz and 1.0GHz clock speed and fits onto a socket A motherboard.
Standard AMD processors include the Athlon and Duron, designed to
compete with the Pentium and Celeron, respectively. The benefit of the AMD
processor line is the fast bus speeds discussed previously in the socket A
description. When matched with an appropriate motherboard, the AMD
processor performs efficiently. (See Table 3.4.)
Alpha Chips
DEC (Digital Equipment Corporation) introduced a 64-bit processor in
1992 called an Alpha chip. Recently Compaq released a series of servers
featuring the Alpha chip, which has a superscalar design allowing the pro-
cessor to execute more than one instruction per clock cycle. It has both
an 8K data and an 8K instruction cache, and a floating-point processor.
Compared to processors that can execute only one instruction per clock
tick, Alpha processors have a definite advantage. This has
obvious benefits in a server environment where a CPU is often required
to process a multitude of requests at a time.
Cooling
Based on the previous information on processors, it is easy to under-
stand that cooling in a server environment is critical. Regardless of the
manufacturer or model of your CPU, it will need a way to deal with heat
buildup. The faster the clock speed of the CPU, the more heat it generates,
and server processors tend to run at a high clock speed. Remember that some
servers have several CPUs. Most servers have numerous cooling fans to
assist with heat dissipation. With the dangers of overheating, fans are often
clustered together in groups with cowlings to channel air through the server.
It is also advisable to have multiple fans to allow for redundancy—should
a fan fail, another will maintain the airflow.
Physical placement also matters: avoid installing the server in a
location where it will face direct sunlight. Besides sunlight, other potential
concerns would be the location of water pipes and electrical wires (electrical
fields). All of these could compromise server safety.
Overclocking
In the world of desktop computers, overclocking is all the rage.
Overclocking involves forcing your computer (usually the processor) to
run harder and faster than the manufacturer intended. All processors per-
form within a specific range. There is a window by which this range can be
extended and the processor made to work at the upper levels of its capabil-
ities. This is comparable to athletes who use steroids. Performance is better
but there are serious side effects. With increased processing capabilities
comes increased heat. Overclocking demands improved processor and case
cooling. Some users who have overclocked their systems go so far as to create
custom fan and heat sinks to deal with the heat issue. Before you decide to
overclock, make sure you carefully weigh the possible consequences: short-
ened life of the CPU, overheating, and damage to other components such as
the motherboard.
When related to servers, overclocking is frowned on. The processor in a
server is responsible not only for the running of the server but also shared
applications, printers, file security, authentication, Internet, and e-mail, so
the processor is working very hard already. Forcing it to work faster can lead
to catastrophic consequences. Remember, you can get along without a desk-
top for a while, but can you remain productive without your server?
Summary
This chapter began with an exploration of motherboards. Integrated
motherboards have several common components built into the motherboard
that would otherwise be on expansion cards. These include video, audio,
modem, and network cards. Non-integrated motherboards require separate
expansion cards for each component. Benefits of integrated motherboards
include lower price, while the major drawback is the danger of component
failure that would result in replacing the entire motherboard.
Intel and AMD designed processors especially for the stressful environ-
ment of a server. With these advanced processors comes the need for added
cooling. This discussion led into a section on overclocking—a risky practice
that can result in overheating.
Exam Essentials
Know the differences between integrated and non-integrated mother-
boards. Make sure to have a strong understanding of the differences,
advantages, and disadvantages of both motherboard styles.
Be able to identify the differences between motherboard form factors.
Identify AT/Baby AT, ATX, and NLX form factors and their limitations
and benefits.
Be able to label the common components of a motherboard. This
includes expansion slots, RAM banks, processor socket, CMOS battery,
CMOS chip, power connector, and on-board controllers.
Understand the differences between common expansion busses. Be able
to identify the busses by speed, configuration, and use.
Know the IRQ and DMA resources for the ISA bus. Know what
resources are in use for what devices, as well as what resources are
available.
Know the different types of RAM memory. Know the differences
between EDO, SDRAM, DDR RAM, and Rambus RAM. Be aware
of performance differences between the types of RAM.
Be able to explain the different types of ROM. Understand the
different levels of ROM, including PROM, EPROM, and EEPROM.
Know the different processor slots and supported processors. This
includes all sockets, slots, and slockets for Intel chips as well as the AMD
processors.
Know what firmware is and which common components have
firmware. Be able to identify a firmware chip as well as its purpose.
Know the processors that are available for use within a server. Be aware
of voltages, speeds, and common names for processors from both Intel
and AMD.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. What is the difference between integrated and non-integrated
motherboards?
A. Integrated motherboards have built-in components normally
found on expansion cards.
B. Non-integrated motherboards are newer technology.
B. Video
C. Modem
D. SCSI controller
E. Hard disk
B. ATX
C. MCA
D. NLX
C. Riser card
D. ZIF processor socket
A. PCI
B. AGP
C. ISA
D. IDE
B. 32-bit at 33MHz
C. 32-bit at 66MHz
D. 64-bit at 66MHz
A. 16-bit or 32-bit
B. 32-bit or 64-bit
C. 32-bit or 32-bit
D. 64-bit or 64-bit
A. Jumpers
B. Software
A. 8/10MHz
B. 6/8MHz
C. 10/12MHz
B. 7
C. 3
D. 9
A. SD RAM
B. EDO RAM
C. RD RAM
D. Rambus
A. PROM
B. EPROM
C. EEPROM
D. ROM
B. Level 3
C. Level 1
D. Level 4
18. Which socket is used for Pentium processors that run at speeds
up to 75MHz?
A. Socket 3
B. Socket 4
C. Socket 7
D. Socket 5
A. Socket 7
B. Socket 5
C. Socket 4
D. Socket 3
10. C. Resources for a PCI bus are configured automatically through the
BIOS settings.
11. A. An ISA bus can transfer at speeds of either 8 or 10MHz.
12. B. LPT1 is assigned IRQ 7 by default.
14. B. EDO RAM is installed in pairs. This was a major limitation for
this type of RAM.
15. A. SD RAM is short for synchronous dynamic RAM. This RAM
works to a synchronous clock cycle.
16. C. Electronically erasable programmable read-only memory can be
digitally erased, or flashed, as it is often called.
When talking about a physical disk, we are referring to the actual device
located within a server, or connected to the server externally. For example,
a computer with two hard disks will have two physical disks. If you can hold
it, it’s a physical disk. Physical disks can refer to a variety of devices other
than hard disk drives, such as floppy disks, compact discs, and tape drives,
to name a few. The number of physical disks that your computer can have
is based on the interface type. As an example, SCSI technology allows for
more disks per controller than IDE.
A logical disk is not quite as straightforward as its physical counterpart.
One of the most common ways to define a logical disk is any space of a hard
disk (or other storage unit) that has its own disk letter.
As you learned while studying for your A+ exams, a hard disk can be par-
titioned, or divided into smaller sections. Each partition receives a drive let-
ter and therefore can be considered a logical disk. It is not uncommon for a
computer to physically have one hard disk but logically have two. This often
leads to considerable misunderstandings for those who are new to the com-
puter world. Many times people purchase a new computer assuming there
are two hard disks included in the system. After all, in the Windows envi-
ronment, they see two hard disks labeled C and D. Later on they discover,
by either removing the cover themselves or having someone inform them,
that they actually have only one physical hard drive.
Windows-based systems can support up to 23 logical disks each. After
that, we run out of letters in the alphabet. Yes, the English alphabet has 26
letters, but remember that A, B, and C are reserved for two floppies and the
first hard disk. NetWare and Unix do not use drive letters. They name their
logical disks (called volumes) by referring to the logical unit by machine and
then a volume name. For example, on a NetWare server, there is always a
SYS volume. If your NetWare server is named NW1, the volume name (log-
ical disk name) would be NW1/SYS:.
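The drive-letter arithmetic above is easy to verify: with A, B, and C reserved, 23 letters remain for additional logical disks. A small sketch:

```python
import string

# A and B are reserved for floppy drives and C for the first hard disk,
# leaving D through Z available for additional logical disks.
reserved = {"A", "B", "C"}
available = [letter for letter in string.ascii_uppercase if letter not in reserved]

print(available[0], available[-1])  # D Z
print(len(available))               # 23
```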
In a network environment, you will frequently work with multiple logical
disks. You will often find network mapped paths, which also qualify as a
logical disk. These mapped paths are given a unique drive letter and for all
intents and purposes act as another hard disk for the end user. The difference
is that these disks do not reside on the end user’s computer but rather on the
server, or other computer, and are accessed through the network. Network
mapped paths are a logical pointer to a physical resource. This tricks the cli-
ent computer into thinking that it has another hard disk, which in actuality
is a part of a hard disk on another computer. Your computer may think it has
an H disk, and it’s right. The trick is, the information is physically located
somewhere else.
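The idea of a mapped path as a logical pointer to a physical resource elsewhere can be sketched as a simple lookup. The drive letter and share name below are hypothetical examples:

```python
# A mapped network path is just a local drive letter pointing at a remote share.
# The server and share names here are hypothetical examples.
drive_map = {"H:": r"\\SERVER1\DATA"}  # letter -> UNC path on another machine

def resolve(path):
    """Translate a local drive-letter path into its network location, if mapped."""
    letter, rest = path[:2], path[2:]
    return drive_map.get(letter, letter) + rest

print(resolve(r"H:\reports\q3.doc"))  # \\SERVER1\DATA\reports\q3.doc
```

The client works with H: exactly as it would a local disk; only the lookup knows the data lives on another computer.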
Should the host computer lose connection to the network, the mapped
path will not be accessible. Figure 4.1 illustrates the view of a mapped path
through the My Computer icon in Windows 2000. Notice how the mapped
path icon for disk F is similar to that of the hard disks, but has a small
network wire connector visible below the disk picture. This small network
wire connector is there to remind you that this is a mapped drive. If the
network-mapped disk is unavailable, it will have a red X on the network
wire connector.
Also on a network, you may run into multiple physical devices that form
one logical disk. A good example is a RAID 1 or RAID 5 array. There will
be at least two physical hard disks (three for RAID 5) that appear as one
logical unit.
So with all this talk of physical versus logical disks, what’s the real deal?
It seems slightly confusing. Just remember that if you can hold it, it’s a
physical disk. However, logical disks are all about how you define them.
A physical hard disk may have multiple logical disks, and at the same time
multiple physical hard disks can make up one logical disk. It all depends on
how you set it up. Last but not least, logical disks can also be physically
contained on another machine, but you have a drive letter on your machine
representing it.
Storage devices interface with the computer through two possible means:
either an IDE or SCSI interface. Because the connectors are so different,
IDE devices cannot be on a SCSI controller, and vice versa. However, some
computers do have connectors for both types of disk. Depending on costs
and requirements, choosing one over another is sometimes difficult in the
desktop/workstation environment. In the server environment SCSI is the
dominant storage device because it is much more extensible, it is compatible
with a wide variety of devices, and it has evolved to operate at many different
levels and speeds. Basically, it’s faster and more expandable. However, it is
also more expensive. In the next two sections, we will explore each option,
for a means of comparison and also for configuration purposes.
IDE Technology
IDE (Integrated Drive Electronics) technology was first created as a simple
means of adding components to a computer. Today, this technology is more
commonly associated with ATA (AT Attachment) technology. The
controller circuitry is located right on the device itself. The device is then
attached to the motherboard or expansion card with a short 40-pin ribbon
cable. Most cables are keyed so they will fit only one way. If the cable is
not keyed, then the rule is that the red stripe on the cable connects to pin 1.
Closely examine the cable and you will see that one side of the ribbon has
a red line (although sometimes it’s blue). Also a close examination of the
device will reveal that the connector is labeled with a pin 1. If you can’t
locate pin 1, it’s always on the side of the connector closest to the power con-
nector on the hard disk. A simple jumper configuration (discussed later in
this chapter) provides a means of configuration, and the IDE device is
installed and running.
The original release of this technology supported disks up to 528MB in
size and transferred at a speed of 3.3MBps. As disk sizes increased, ATA-2
(version 2) was released; it supported disks of several gigabytes and raised
the transfer speed to 11.1MBps. This release was commonly known as
EIDE or Enhanced IDE technology. The latest release is Ultra DMA/33
(Ultra Direct Memory Access 33) IDE. It can transfer at speeds of 33MBps.
Newer versions of Ultra ATA IDE can support 66MBps and 100MBps. They are
called Ultra ATA/66 and Ultra ATA/100, respectively.
Regardless of the release or version, IDE has one major limitation, which
is that it can support only two devices per controller. In the days of its incep-
tion this was not seen as problematic because hard disks were the only
devices supported. However, with the creation and widespread use of CD-
ROMs, DVDs, and CD-RWs, the need to support multiple devices grew. All
new motherboards tried to meet this demand with support for two separate
IDE channels, often referred to as IDE controllers (primary and secondary).
This would then allow for four devices (two on each channel). In today’s
world of multiple physical disks, burners, and DVD drives, IDE technology
is not the most flexible. Also, 33MBps may seem wonderfully fast, but
compared to SCSI it takes a back seat. And because of overhead associated
with the technology, you will only get about 75 percent of the theoretical
maximum transfer rate with IDE hard disks.
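The roughly 75 percent figure above can be applied to the IDE transfer rates just mentioned. This small sketch (the function name is our own, for illustration) estimates sustained throughput:

```python
# Estimate sustained IDE throughput using the roughly 75 percent
# efficiency figure cited in the text.
def effective_mbps(theoretical_mbps, efficiency=0.75):
    """Approximate sustained transfer rate after protocol overhead."""
    return theoretical_mbps * efficiency

for name, rate in [("Ultra DMA/33", 33), ("Ultra ATA/66", 66), ("Ultra ATA/100", 100)]:
    print(f"{name}: ~{effective_mbps(rate):.1f}MBps of {rate}MBps theoretical")
```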
In the server environment speed is essential. Access time to server hard
disks when multiple requests are incoming can add up to rather long wait
periods. If the request is from an application such as a database, the possi-
bility of a time-out error is real. A time-out error happens when the program
gives up waiting for a response from the server. IDE devices have tried to deal
with wait state issues by increasing hard disk spin rates (the same holds true
for SCSI). Typical hard disks spin at 5,400 RPM (revolutions per minute,
referring to the speed at which the platters spin). New releases in IDE
hard disks include 7,200 RPM and 10,000 RPM. Theoretically the faster
the hard disk can spin the faster that the actuator arm and read/write head
can get to the data stored on the platters. This has improved performance in
hard disks but still doesn’t deal with the primary issue with IDE technology:
support for only two devices.
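The connection between spin rate and access time can be quantified with the standard approximation that, on average, the desired sector is half a revolution away from the read/write head. The formula itself is our addition, not from the text:

```python
# Average rotational latency: on average the desired sector is half a
# revolution away, so latency = 0.5 * (60 / RPM) seconds.
def avg_rotational_latency_ms(rpm):
    """Average rotational latency in milliseconds for a given spin rate."""
    return 0.5 * (60.0 / rpm) * 1000.0

for rpm in (5400, 7200, 10000):
    print(f"{rpm} RPM: {avg_rotational_latency_ms(rpm):.2f}ms average latency")
```

Going from 5,400 to 10,000 RPM nearly halves the average wait for data to rotate under the head, which is why high-RPM disks matter in busy servers.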
Setup Jumpers
[Figure: ATA hard drive jumper settings (Style A), rear view]
Most administrators avoid using CS (Cable Select) on their hard disks
because it slows down the boot process: during boot, the computer has to
scan the IDE cable to detect whether devices are present and then determine
which is to be master and which slave.
Many administrators also place the boot hard disk in the master
position to further assist with speed to the primary boot disk. If you have
a CD-ROM and a CD-RW, it is advisable to connect both on the secondary
channel. This will help to prevent buffer underrun errors that can occur
when the CD-ROM is on a different IDE channel from the CD-RW.
Once the jumpers are configured for the master and slave settings, the IDE
devices can be installed into the computer case. The final step involves enter-
ing the setup utility when the computer is booting. In your BIOS setup is an
option to manually specify what is connected to each of the master and slave
positions for both IDE controllers, or you can have the BIOS autodetect
these devices each time the computer is booted. If you choose to have the
BIOS autodetect these devices, expect the boot process to become slower.
An option that can be very helpful when installing a hard disk is the Auto
Hard Drive Detect. This utility, found within the BIOS setup program, will
scan the IDE channels and try to identify the hard disks and appropriate
settings for you. After the utility scans the disks, it will present you with its
findings for you to accept or reject. If you accept, they will be automatically
set in the BIOS.
Older BIOSes did not have the ability to autodetect hard disks. You had
to enter in configuration information manually. Even though it may be a
bit slower to have your computer autodetect the hard disks, it’s still a
valid option.
Common IDE devices used today include hard disks, tape drives, CD-
ROM, DVD, CD-RW, and internal Iomega Zip drives. Combinations of
these devices are found in every computer from laptops to servers. Each uses
the previously mentioned jumpers and master/slave settings.
With the limitation of two available devices per channel, IDE is not
common in servers. Many servers contain more than two hard disks alone,
which would leave no connectivity options for a CD-ROM drive or a backup
drive. Transfer speed is another concern. Although IDE speeds seem impres-
sive in a stand-alone computer, when placed in a server environment where
the speed is shared among many client computers, IDE struggles at times
under the demands. More common in a server is the SCSI structure for stor-
age devices. This is not to say that there is no IDE at all in servers. Server
motherboards normally contain at least one onboard IDE slot.
Some years ago, I was working as a consultant for a small insurance office.
They only had four computers, and no servers. It was a simple configura-
tion, and they didn’t require a lot of maintenance.
They came to the point where they needed additional storage space in the
workstation that held their database. So, they bought an additional hard
drive, and called me to put it in.
I arrived at the office, powered the machine down, and removed the case. I
grounded myself properly (notice, good ESD safety being practiced), and
removed the existing drive. Sure enough, the drive was not jumpered as a
master. So, using the diagram on the drive itself, I set the jumpers to mas-
ter. Looking at the new drive, I jumpered it as a slave. I put both drives back
in the machine, and powered it up.
It didn’t boot. So I checked the system BIOS, and manually configured the
master and slave settings based on the drive parameters. We rebooted
the machine, and still nothing. Just as a test, we changed the jumpers
on the new drive to make it a single, changed the BIOS, and sure enough,
the drive booted. So we knew that the new drive was good. And the old
drive had been working a few minutes ago. So what could be the problem?
We set it back up again with the old drive as the master, and the new drive
as the slave, and got the same results. It didn’t want to play. Looking at the
drives again, I noticed that the old drive was a Seagate, and the new one
was a Maxtor. Puzzled, I switched the master/slave relationship of the
drives, changed the BIOS, and the system booted fine.
The moral of the story is this: sometimes when dealing with multiple
IDE drives, they may not work in a specific master/slave relationship. Try
switching them to see if that helps. It’s not necessarily a Seagate versus
Maxtor thing, but it is more common when you use drives made by differ-
ent manufacturers. So if you can, stick to using drives made by the same
company. It’s also somewhat common for older drives to not work as mas-
ters to newer drives. Again, change the master/slave relationship, and you
should be okay.
SCSI Technology
SCSI (Small Computer Systems Interface) is far more robust than the IDE
structure. Unfortunately it is also far more complex in configuration and
setup, and more expensive. When you talk about SCSI devices, the discussion
is not limited to hard disks. Available SCSI devices include a broad range
of internal and external components. In the server environment, the range is
often dominated by hard disks, tape backup drives, and CD-ROM drives.
However, there are also SCSI scanners, optical devices, and others. The fol-
lowing section focuses on the fundamentals of SCSI hard disks.
The SCSI standard was put into effect in the mid-1980s and specifies a
universal, parallel, system-level interface for connecting up to eight devices
(including the controller) in a chain on a single shared cable. This grouping of
devices is called a SCSI bus. SCSI busses are extremely flexible in design. The
SCSI controller card controls the devices, so you can be confident that, as long
as the card works in the computer, then the SCSI devices will also. Therefore
SCSI devices will perform equally well in a PC, a Mac, or a Sun Microsystems
workstation, as long as the controller card itself works with the operating
system and other hardware. The SCSI controller card contains its own config-
uration as well as firmware. SCSI supports many more devices than available
in the IDE technology, and also transfers information at much faster speeds.
All SCSI configurations require termination at both ends of the chain.
If there is no termination, the signal will bounce back and forth along the
chain, causing the devices to fail. SCSI adapters have a terminator built in,
and you must supply the terminator at the other end. SCSI devices are iden-
tified by a SCSI ID number. The controller typically takes ID 7, and the
devices get 0 through 6.
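The ID and termination rules just described can be expressed as a small validation sketch. The function and its messages are illustrative only, not part of any real SCSI toolkit:

```python
# A sketch of validating a narrow SCSI bus per the rules described above:
# at most eight devices (controller included), unique IDs in the range 0-7,
# the controller typically at ID 7, and termination at both ends of the chain.
def validate_scsi_bus(device_ids, controller_id=7, ends_terminated=True):
    """Return 'ok' or a short description of the first configuration problem."""
    ids = list(device_ids) + [controller_id]
    if len(ids) > 8:
        return "too many devices on the bus"
    if len(set(ids)) != len(ids):
        return "duplicate SCSI ID"
    if any(i < 0 or i > 7 for i in ids):
        return "SCSI ID out of range"
    if not ends_terminated:
        return "missing termination: signal will reflect along the chain"
    return "ok"

print(validate_scsi_bus([0, 1, 2]))  # ok
print(validate_scsi_bus([0, 7]))     # prints "duplicate SCSI ID" (controller already at 7)
print(validate_scsi_bus([0], ends_terminated=False))
```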
SCSI Types
SCSI technology has seen a constant and dramatic change since its incep-
tion. The first release of SCSI technology was rather awkward and limiting.
Still, the potential was clearly evident. Later releases improved on predeces-
sors in areas of speed and reliability. The following is a brief look at the
essential elements of each major SCSI release.
SCSI-1 The first true implementation of SCSI was SCSI-1, created in
1986. It had a 5MBps transfer rate and used a Centronics 50-pin cable or
a DB-25 female connector with an 8-bit bus width. SCSI-1 was based on
single-ended transmission and used passive termination. Passive termi-
nators had only resistors to terminate the bus, as opposed to active ter-
minators that have voltage regulators for added reliability. The original
release of SCSI was not without problems. While there were standards,
there was inconsistent implementation of the standards by vendors.
SCSI-1 is now obsolete. If you mix SCSI-1 devices on a bus with other SCSI
devices, performance will degrade.
SCSI-2 The goal of SCSI-2 was to improve on performance and reliabil-
ity, and to enhance features. SCSI-2 was also needed to standardize the
commands used with the technology. SCSI-2, which was backward com-
patible with SCSI-1, introduced a higher-density connector and both an
8-bit and 16-bit wide bus. The 16-bit bus was known as Wide SCSI-2.
SCSI-2 also introduced a faster speed release called Fast SCSI-2, which
used the 8-bit bus but at a speed of 10MBps. It was also possible to com-
bine the best of both Wide SCSI and Fast SCSI to get SCSI-2 Fast-Wide
(16-bit at 20MBps). SCSI-2 also used active termination, which is more
reliable than the passive termination used in SCSI-1.
Wide Ultra-2 SCSI Wide Ultra-2 is a step up from SCSI-2. This release
provided LVD or HVD signaling, a 16-bit wide bus, transfer speeds of
80MBps, LVD or HVD termination, and used a 68-pin connector.
Ultra-3 SCSI Ultra-3 is the latest SCSI standard. Ultra-3, also called
Ultra SCSI, operates at a faster 20–40MBps, a definite improvement
over previous SCSI releases. Ultra-3 also addressed another problem:
cable length. SCSI-3 introduced Low Voltage Differential (LVD)
signaling, which increased the possible cable length to 25 meters with
a possible transfer speed of 160MBps. Ultra-3 SCSI uses a 16-bit wide
bus with LVD signaling and termination, and a 68-pin connector.
Ultra 160 This release is a subset of Ultra-3. It is a parallel interface that
uses a 16-bit wide bus and LVD signaling and termination, and has a
maximum transfer speed of 160MBps. Although similar to Ultra-3, Ultra
160’s faster transfer speed and LVD addition warranted the creation of
this new SCSI category to prevent compatibility issues between device
vendors. Ultra 160 also used a 68-pin connector.
Ultra 320 SCSI Ultra 320 is the next generation of parallel SCSI inter-
face. At one point it was called SCSI Ultra-4. It is a 16-bit wide bus that
uses LVD signaling, LVD termination, a 68-pin connector, and has a
transfer speed of 320MBps.
Table 4.1 helps to clarify the various releases of SCSI and their speeds and
cable specifications.
TABLE 4.1 Common SCSI Releases

Type                Bus Width   Transfer Rate (MBps)   Connector             Cable Length
SCSI-1              8           5                      DB-25/Centronics 50   6m
SCSI-2              8, 16       5–20                   High-density 50-pin   6m
Wide Ultra-2 SCSI   16          80                     68-pin                25m (LVD)
Ultra-3/Ultra 160   16          160                    68-pin (LVD)          25m (LVD)
Ultra 320 SCSI      16          320                    68-pin (LVD)          25m (LVD)

[Figure: a 32-bit PCI SCSI adapter card]
SCSI adapter cards can vary from a simple controller card that is
packaged with SCSI scanners to very expensive multichannel models.
The type of card you choose will depend on your budget as well as needed
performance.
[Figure: a SCSI chain with devices assigned SCSI IDs 0 through 7]
Most SCSI cards are configured to ID 7 by default, but you might sometimes
need to reconfigure your adapter card to a different ID. You do this through
the software configuration utility.
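The adapter's default ID of 7 matters because SCSI IDs also determine bus-arbitration priority: ID 7 is the highest, descending to ID 0, and on a wide (16-bit) bus IDs 15 down to 8 follow after ID 0. The ordering can be sketched as follows (a minimal illustration; the helper name is ours, not from any vendor utility):

```python
def scsi_priority(ids):
    """Sort SCSI IDs from highest to lowest bus-arbitration priority.

    On a SCSI bus, ID 7 has the highest priority, descending to ID 0;
    on a wide bus, IDs 15 down to 8 follow after ID 0.
    """
    def rank(i):
        # IDs 7..0 outrank IDs 15..8; within each group, higher ID wins
        return (0, -i) if i <= 7 else (1, -i)
    return sorted(ids, key=rank)

print(scsi_priority([0, 3, 7, 10, 15]))  # [7, 3, 0, 15, 10]
```

This is why the adapter keeps ID 7 for itself, and why slow devices are sometimes given higher IDs so they are not starved during arbitration.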
SCSI Termination
SCSI termination seems simple enough. Place a terminator at the beginning
and at the end of the SCSI chain (just as you learned while studying for your
A+ exam) and everything should work. However, the majority of configura-
tion problems with SCSI occur with termination or ID numbering. SCSI
termination can be a difficult task because of the many variables you must
consider.
First, termination must occur at the ends of the SCSI chain. Most internal
devices are terminated through the use of a jumper. Be sure to locate the cor-
rect jumper and apply it appropriately. Each manufacturer has a specific
sequence in which to properly apply or remove termination. If the SCSI
controller card is to be terminated, it is normally done through software con-
figuration. During system bootup you will see an option to enter the SCSI
configuration utility, in which adapter card termination can be enabled or
disabled. Where termination takes a much more difficult turn is when exter-
nal SCSI devices are introduced. When both internal and external devices are
present, the termination is removed from the SCSI adapter card and then
configured on the device at the end of the internal and external chains. With
external termination, a secondary device is often needed. Depending on the
type of SCSI bus, the terminator will vary. There are four basic types of SCSI
termination.
Passive Termination consists of a 220-ohm resistor that connects the
signal line to the termination power (TERMPWR) line and a 330-ohm
resistor that connects the signal line to ground. Passive termination is
less expensive but can lead to issues with line noise and dirty signals.
Passive termination is not recommended for SCSI-2 configurations.
HVD SCSI, however, does use passive termination.
Active Termination was created to eliminate the signal problems experi-
enced with passive termination. Active termination is built around a
voltage regulator, which reduces fluctuations and feeds a 110-ohm
resistor on each signal line. SCSI-2 uses active termination.
FPT uses diode switching and biasing to fill any fluctuations between
the cable and devices. FPT (Force Perfect Termination) is a more
advanced form of active termination.
LVD is based on a form of active termination. Low Voltage Differential
(LVD) is based around the higher speeds of SCSI Ultra-2. Special LVD/SE
(single-ended) terminators can be used on busses that have both LVD
and SE devices.
After all the devices are attached and installed, review each device to con-
firm proper cabling and termination. When you are sure that installation is
complete, you can start the server and enter into the SCSI utility. Depending
on the manufacturer, this can be done in several ways. Normally access is
gained during the bootup of the computer. A line of text will appear on the
screen telling you to press a specific key or sequence of keys to enter the SCSI
setup utility, where you can confirm that each device is identified by the SCSI
adapter (based on the SCSI ID and LUN) and is functioning properly.
Most SCSI cards are terminated by default. If you are connecting internal
and external SCSI devices to your chain, be aware that you will have to
disable the termination on the adapter card. Figure 4.6 is an example of an
external terminator.
SCSI Cables
SCSI cables play an important role in the SCSI chain. SCSI cabling today
comes in many different forms. Remember that SCSI devices can be internal
or external. Internal cabling differs significantly from external cabling in
terms of durability as well as reliability. Consider that cable connectors also
vary depending on the SCSI standard being implemented, and it becomes
easy to see that there are numerous possibilities.
Internal cables follow two different forms:
Standard ribbon cable (similar to IDE and floppy cable) is commonly
found within the server case. This cable normally is 68 wires wide (to
accommodate the 68-pin connector) but can also come in a 50-pin form.
Newer internal cable is twisted-pair cable and looks a little like spa-
ghetti. This cable is round, not flat like ribbon cable, and has twisted
pairs of wires for each pin. The idea behind using the twisted pairs is
to reduce signal degeneration. This cabling costs more than traditional
ribbon cable, but does improve signal stability. It is often used in
longer SCSI bus implementations. Twisted-pair cable can also be
found with metal braided shielding surrounding the twisted pairs,
which protects further against signal interference.
External SCSI cables need to be more durable than internal cables. Being
exposed to the environment and environmental hazards, these cables have a
strong external sheathing to protect internal wires. Many external SCSI cables
also contain ground shielding, which protects against signal interference.
SCSI Connectors
SCSI connectors physically attach the drive to the cable. Several different
types, such as Centronics connectors, are available to meet the demands of
bus width and speed. Figure 4.7 illustrates some common connectors that
you may encounter.
[Figure 4.7: common SCSI connectors, including the Very High Density Centronics 68-pin
connector used on SCSI-3 and Ultra 2, and the 80-pin connector used by Ultra 160]
Disk Arrays
If you have multiple disks in a RAID configuration, you should first check
with your manufacturer’s documentation. If software is controlling the
array, it may be difficult to expand. As an example, in Windows NT and
Windows 2000, if you have RAID 1 or RAID 5, you cannot expand it
without deleting the existing array and creating a new one. Of course, if
you do this, please make sure to back up (and test) your existing data first.
(See Chapter 5, “Fault Tolerance and Redundancy” for more information
about RAID.)
Many manufacturers provide external disk arrays for use with servers.
The disk arrays connect to the server through a proprietary expansion card.
These devices are often very expensive, but have some major benefits. Most
of these external storage units contain their own processor and memory,
which makes them very fast, and they do not drain excess resources from
your server. They are also very expandable. Also, most of them use hot-
swappable disks. If one fails, you will get a red indicator light next to it.
Simply pull it out and put a new one in, and the unit will integrate the
disk automatically for you. Technology is a beautiful thing.
Summary
T his chapter covered essential details for using hard disks as storage
devices. First, we talked about hard disk structure, differentiating between
physical and logical disks.
We then discussed IDE hard disk drives. While IDE may not be as fast as
SCSI, it’s very commonly used because of its lower cost. IDE devices are eas-
ier to configure, but you are limited to two IDE devices per IDE controller.
When using two devices on one controller, you need to set one device as
master and the other as slave.
SCSI is the most popular hard disk technology used in servers. It’s fast,
and allows for large numbers of hard disks per machine. SCSI has many dif-
ferent standards, but all are based on backward compatibility. SCSI devices
are somewhat harder to configure than IDE devices, are more expensive, and
require termination. The benefits of SCSI over IDE include greater perfor-
mance, flexibility, and support for internal and external devices. Last, we
looked at some administration tips for managing your disk storage solutions.
Exam Essentials
Know the difference between a physical and logical disk. The actual
device located within a server, or connected to the server externally, is the
physical disk. A logical disk is defined by its drive letter.
Know what a mapped disk is. A mapped disk is a path on a client
workstation pointing to a network disk or network share. In Windows
Explorer it appears to the client as a physical disk although it is actually
a path to a logical disk or share on another computer.
Know the three configurations for IDE hard disks. Using jumpers, you
can set IDE devices in one of three configurations: master, slave, or cable
select. You can choose cable select if the devices are capable of performing
automatic selection of correct master/slave configuration.
Know the major limitations of IDE in a server environment. One of
the first limitations of IDE is its support for only two IDE devices per
channel. It also lacks the transfer speeds often needed within a server.
Know the three SCSI signaling methods. There are three different sig-
naling methods: single-ended (SE), High Voltage Differential (HVD), and
Low Voltage Differential (LVD).
Know the common SCSI standards. The various SCSI standards also
have varying bus widths, transfer rates, connector types, and cable length
requirements.
Know the key SCSI configurations. Configuring SCSI devices includes
setting the SCSI IDs and LUNs, and ensuring proper termination.
Understand SCSI IDs and LUNs. Each device must have a SCSI ID that
uniquely identifies it on the SCSI chain. For a SCSI device performing
multiple functions, a LUN identifies each function.
Understand SCSI termination. SCSI termination must occur at both ends
of the chain. Usually the controller terminates one end, and the last device
in the chain terminates the other. The four basic types of termination are
passive, active, FPT, and LVD.
Know the various SCSI cables. Depending on the device, SCSI cabling
can be internal or external. Internal cables follow two forms: standard
ribbon and twisted-pair.
Know the benefits of SCSI over IDE. SCSI provides the following
benefits: faster transfer speeds, support for more devices, and decreased
processor load.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. You are the network administrator for your company. One day, your
boss walks in beaming about a new technology she read about called
IDE. However, your boss forgets what it stands for. What does IDE
stand for?
A. Industrial Drive Electronics
B. Two
C. Three
D. Four
D. A-Terminal Attachment
4. You are configuring a new server for your company. One of the junior
employees wants to know what types of devices are going to be plugged
into the IDE controllers on the motherboard. Which of the following
is the least likely device you will plug into an IDE controller?
A. Printer
B. Hard disk
C. CD-ROM
D. DVD
5. You have just installed brand new Ultra ATA/100 hard disks in your
server. However, when you monitor the disk performance, you notice
that you are only getting approximately 25MBps throughput. What is
the most likely problem?
A. The hard disk is defective and must be replaced.
B. Jumpers
C. Software
D. DIP switches
7. You have just installed a second hard disk into your server. You are
using IDE devices. However, when you boot the machine, it hangs up
during the hard disk detection portion of the POST. You quickly place
the disk into another machine, and it boots fine. What is the first thing
you should check to get the hard disk working properly in your server?
A. Check the system BIOS to make sure that Ultra DMA parity
checking is enabled.
B. Check the system BIOS to ensure that LBA is enabled for the
hard disk.
C. Check the back of the hard disk to make sure it is properly
terminated.
D. Check the back of the hard disk to ensure that you have the proper
master/slave configuration.
B. Six
C. Seven
D. Eight
10. Your SCSI chain is composed of devices that all use LVD signaling.
You have an older SCSI device that you want to add to the chain,
but you are not sure what type of signaling it uses. What should you
do and why?
A. Do not install it because if it’s an SE device it could ruin the system.
11. You are the junior network administrator at your company. The
senior administrator is in a rush to fix the server, and tells you to get
a SCSI connector out of the parts box. When you ask what kind, she
tells you to get a connector for an SE device. What type of connector
do you grab?
A. 50-pin narrow
B. DB-35
C. Centronics 68-pin
D. DB-9
12. You are installing a new SCSI hard disk into your server to act as the
boot disk. The only SCSI ID you have available is 5. You configure
the disk for that ID and boot the server. However, the system is boot-
ing from the old disk, not the new one. What could be the problem?
A. The new disk is malfunctioning.
B. The SCSI adapter will only boot from a disk with a SCSI ID of 0.
C. The SCSI adapter will only boot from a disk with a SCSI ID of 7.
A. Centronics 50
B. Centronics 68
C. DB-25
D. High-density 68-pin
A. 8
B. 16
C. 12
D. 4
15. You are in the process of upgrading your SCSI controller in your
server to SCSI-3. You tell your boss that it’s because of the enhanced
bus speed of the newer technology. What bus speeds does SCSI-3
operate at?
A. 10–20MBps
B. 20–30MBps
C. 20–40MBps
D. 30–40MBps
16. You install a new SCSI hard disk into your server. After rebooting the
machine, the new hard disk is not detected. You place the disk in
another machine that has no other SCSI devices, and it responds prop-
erly. What could be the problem?
A. The disk is malfunctioning.
B. The SCSI ID that the disk is using is already being used by another
device in the first machine.
C. The SCSI ID on the disk is set to an invalid number.
D. The LUN ID that the disk is using is already being used by another
device in the other machine.
17. You are using narrow SCSI technology in your server. You already
have maxed out the number of possible devices in your SCSI chain.
You need to expand the server by adding an additional three hard
disks. What do you do?
A. There is nothing you can do, as computers can only have one SCSI
adapter each.
B. Add the hard disks to your IDE connector on the motherboard.
C. Add the devices to the existing SCSI adapter, and place the termi-
nator at the end of the new chain.
D. Add an additional SCSI adapter to your server, and attach the
disks to it.
18. You are in the process of adding SCSI devices to a new SCSI adapter.
When you attempt to set the first device, you are not sure which ID to
use. You have not changed the default configuration of the SCSI adapter.
What is the most likely SCSI ID assigned to the SCSI adapter cards?
A. 0
B. 9
C. 7
D. 3
19. Your boss has instructed you to create a fast new server for your net-
work. Cost is not an issue, but speed is, and you need to ensure nearly
100 percent uptime. The boss would prefer to be able to expand the
storage capabilities and replace failed disks without taking the server
down, if possible. What type of solution should you implement?
A. Use internal IDE hard disks for the new server.
20. Which of the following devices is the least likely to be found connected
to a SCSI controller?
A. Hard disk
B. Modem
C. CD-ROM
D. Scanner
16. B. Each SCSI device on a chain must have a unique SCSI ID.
17. D. Computers can have more than one SCSI adapter. If the first one
is full, add a second to accommodate the additional devices.
18. C. Generally speaking, SCSI adapter cards are by default assigned
SCSI ID 7. This makes the card the highest priority item in the SCSI
chain.
19. C. Internal SCSI and IDE disks are not hot-swappable. Although
third-party storage solutions can be expensive, they are also very fast,
and often have built-in fault tolerance and redundancy as well.
20. B. A modem is not considered a SCSI device.
reboot. However, this is not the only type of problem that can occur within
a computer. These are the three broad fault categories you might see:
Computer Hardware Faults occur when a hardware component fails.
For example, a network card fails, resulting in no access to the server via
network communication. Hardware fault tolerance can provide redun-
dancy by supplying several (or at least two) network cards. The network
cards can be configured to monitor each other. When the primary net-
work card fails, the secondary card can take over.
Software Faults can also bring a server to a halt. By providing mecha-
nisms to support operation despite possible software errors and/or
failures, you increase your availability. These mechanisms can include
monitoring tools to assess system resource utilization or redundant
programs to ensure data access and manipulation.
System Level Faults occur in areas that are not computer based, such as
sensors, lights, diodes, etc. These components, although they may not be
as critical as computer hardware and software, still play an important
role in system operation. Another example of system-wide fault tolerance
would involve monitoring other network components, such as switches
or routers.
Maintaining fault tolerance means ensuring that a fault in one component
of a subsystem does not interrupt the services the system provides.
That said, the primary objective in fault tolerance is eliminating any single
point of failure (SPOF). Depending on the server function, SPOFs can vary.
For example, a high-availability web server will have multiple possible
Internet connections, so that if the primary Internet connection fails, a
secondary or tertiary connection can take over. In contrast, a print server
will not commonly have a web connection but could have several printer
connections and/or multiple links to remote printers.
Depending on the demands of the server at hand, redundancy offers sev-
eral possibilities. At the planning stage, it must first be decided at what level
the server needs to be available. Hot site servers, such as those in hospital and
police networks, must be available always, regardless of any possible disaster
or problem. Hot servers are the most expensive and fault tolerant. Warm
servers are designed to be fault tolerant most of the time. They contain sev-
eral redundant components, usually in what is deemed the likeliest areas of
possible faults. Warm servers cost more than cold servers but not nearly as
much as hot servers. These servers will be fault tolerant most of the time but
still can go down from time to time. Cold servers contain few if any redun-
dant components. Cold servers can and often do fail; pricewise, they are the
most affordable. What you want to do is achieve a balance between cost and
reliability. Ask yourself, “What is the use of the server? What key compo-
nents are in use and stand a chance of failure?” These are the components
that should be part of your fault tolerance plan.
Eliminating every possible SPOF and having maximum availability would
be excellent—but expensive and nearly impossible to implement. If you
think about all the possible SPOFs within a single server, this would mean
installing multiples of each key component. Then, to ensure that there would
be full system protection, you would have to use a backup server (you will
learn more about this clustering of servers later in this chapter).
Consideration must also be given to electrical requirements. An uninter-
ruptible power supply (UPS) is a must, but what if the power fails for more
than the expected battery life of the UPS? Many high-availability systems
employ an electrical generator to provide power in case of a lengthy outage.
Now you might want to provide a backup generator for the first generator.
As you can see, this becomes expensive and can get carried away very quickly.
Instead, most companies will implement a warm server.
In the real world, where the value of a dollar is taken into consideration,
redundancy focuses on the commonly used components as well as those that
might be susceptible to failure. These common components, as well as the
reason for their redundancy, are explained in the next section.
Network Cards
Network cards within a server play an integral role in the network. Without
a NIC, your server is still operational but it suddenly becomes a stand-alone
computer. All the files and resources that are shared become inaccessible to
client computers. Current network cards are, for the most part, inexpensive.
Many servers contain two network cards, and some hold several. The bene-
fits of more than one card are numerous. In terms of redundancy, more than
one connection from the server to the switch or hub provides redundant
paths. If one network card were to fail, then a secondary or tertiary card
would be available to take over the requests. This is referred to as adapter
fault tolerance. Configurations can include all network cards working
together as a team (adapter teaming) and handling requests as one, even
though there are several cards working together.
The benefits to this are obvious. First of all, should the primary card fail,
the second card can take over without any intervention. Adapter teaming
can also provide a certain level of load balancing as network requests can be
distributed evenly between the cards (in turn eliminating the possibility of a
single network card becoming a bottleneck).
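As a rough model of that load balancing, requests can be pictured as being dealt out across the team in round-robin fashion (a simplified sketch; real teaming drivers use more sophisticated, vendor-specific algorithms, and the names here are ours):

```python
from itertools import cycle

def team_dispatch(nics, requests):
    """Round-robin a stream of requests across a team of NICs,
    a simplified model of adapter-teaming load balancing."""
    team = cycle(nics)  # endlessly rotate through the team members
    return [(next(team), req) for req in requests]

print(team_dispatch(["nic0", "nic1"], ["req1", "req2", "req3"]))
# [('nic0', 'req1'), ('nic1', 'req2'), ('nic0', 'req3')]
```

If one card fails, the driver simply removes it from the rotation, so clients never see the failure.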
Power Supplies
Redundant power supplies are becoming increasingly common within
servers. It is fairly common to have a power supply fail. Remember, the con-
version from AC (alternating current) to DC (direct current) occurs within
the computer’s power supply. Component failure and/or fan failure is a con-
cern. Servers containing multiple power supplies are configured to monitor
the primary power supply; if needed, the secondary power supply will take
over from a failing or failed primary supply. However, if the problem is poor
AC power entering the server, then redundant power supplies will not solve
your problem.
Hard Disks
Hard disks are one of the most common components to be seen in a redun-
dant configuration and one of the most common components within a server
to fail. If you can afford to provide only one area of redundancy, then it
should be the hard disks. Remember, your hard disks take the brunt of the
daily stress in a server environment. The hard disks also contain all of the
information that is stored on your network. If any other component within
the server fails, it can be changed out with nothing more than some lost pro-
ductivity time. If the hard disk fails, then all the data and information that
was stored on the disk is also lost. This is why backups are so important.
Backups allow you to restore in the event of data loss. But think of the time
it may take to perform a restore. This is lost productivity time. By imple-
menting hard disk redundancy, you can greatly reduce the time it takes to
bring a server back up in the event of hard disk failure—in some instances,
no downtime at all will be experienced. Hard disk redundancy raises a whole
new concept called RAID. We will be looking at RAID later in this chapter.
Cooling Fans
It goes without saying that cooling is critical within a computer. A server
will generate even more heat than a standard desktop, so cooling is a more
serious consideration in servers. Running several high-spin-rate hard disks,
multiple adapter cards, high-performance processors, and multiple power
supplies generates large quantities of heat. Cooling fans are available in a
variety of forms today. Dynamic fans that are dedicated to a specific com-
puter component, such as a hard disk, are available at minimal cost and are
well worth the investment.
Many server environments contain numerous fans, as well as groups of
fans working together. This form of redundancy ensures that when a fan fails
(and being a mechanical component, it will fail at some point), the failure
will not result in rapid overheating and eventual system instability. Most fan
speeds (RPMs) are also controlled. If needed, the fans can be sped up or
slowed down to control the temperature.
Internet Connection
Where would we be without the Internet? Most businesses rely on the
Internet for their daily operations more than they realize. If daily operations
are reliant on the Internet, then precautions should be taken to ensure con-
nectivity. Precautions include providing multiple connections and different
forms of connectivity. This means that if the T1 connection fails due to issues
with the provider, then Internet access can be gained through another pro-
vider, another service (such as DSL), or—if all else fails—through the good
old dial-up modem. As much as we complain about slow modems, it is better
to have dial-up than nothing.
Clustering Technology
Reliability of network servers has become critical to the success of many
businesses. Most businesses today have resources, applications, and services
hosted on network servers that are crucial for their day-to-day operations.
This means that these resources need a high level of availability. One of the
key technologies available to meet this requirement is clustering. Clustering
servers so they operate as a single server can increase the availability of
resources, applications, and services to an impressive 99.999 percent. Not
only does it provide an economical solution for fault tolerance in the event
of server failure, but it also makes planned outages more convenient.
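To put 99.999 percent in perspective, an availability figure translates directly into allowable downtime per year:

```python
def annual_downtime_minutes(availability):
    """Minutes of downtime per year implied by an availability fraction."""
    minutes_per_year = 365.25 * 24 * 60  # about 525,960 minutes
    return minutes_per_year * (1 - availability)

# "Five nines" allows only about 5.26 minutes of downtime per year,
# while 99.9 percent allows nearly nine hours.
print(round(annual_downtime_minutes(0.99999), 2))
print(round(annual_downtime_minutes(0.999) / 60, 2))
```

That gap between "three nines" and "five nines" is what clustering is designed to close.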
What Is a Cluster?
A cluster is a group of computers that work together as one and logically
appear to be a single system to users on the network (see Figure 5.1). It is a
combination of both hardware and software solutions. Clustering allows
you to link two or more systems together so that if one should fail the other
is ready to automatically assume its workload. In the event of server failure,
applications, services, and resources are migrated to a remaining cluster
member by the cluster software and are restarted.
Server 1 Server 2
C:
Shared disk
Failover
The servers in a cluster provide fault tolerance through failover and failback.
In the event of server failure, component failure, or service failure, the
What Is RAID?
A server’s disk subsystem is one of the most common components to fail.
With this in mind, you will want to implement some form of disk subsystem
fault tolerance when building a reliable network server. RAID (Redundant
Array of Independent [or Inexpensive] Disks) is a group of hard disks that
collectively acts as one storage system, providing tolerance to failure of a
disk within the array.
There are many benefits to using RAID within a server. Providing a means
for high availability within the server is always the primary objective. RAID
allows for data to be highly available regardless of disk failure. In a complex
disk array, one hard disk or more could fail and the server will still run seam-
lessly to the end users. This ability of combining multiple hard disks into a
fault tolerant array is what makes RAID so appealing to server technicians.
Another benefit is speed. Having data stored on multiple disks allows for
the disks to all write information at one time. This speeds the writing process
considerably over a single-disk system.
The main disadvantage of RAID is the cost of implementation. The cost
will vary between implementations depending on factors such as the number
of disks required, the amount of disk space required, and the level of RAID
you choose to implement. Keep in mind as well that RAID does add a level
of fault tolerance to your network server disk subsystem and data but does
not provide a 100 percent fault tolerant solution because most levels of
RAID can recover from failure of only a single disk.
There are two forms of RAID: hardware based and software based.
Hardware based RAID uses a controller card (similar to the SCSI card) that
Volume In terms of RAID, the volume is the total amount of logical disk
space within the array. For example, if you were to implement RAID 5
using four physical disks, combining 3GB of free space from each one, the
RAID volume would be 12GB (four disks × 3GB). Keep in mind that this
raw figure does not account for the space needed for parity; in RAID 5,
one disk's worth of space holds the parity information, leaving 9GB of
usable storage in this example.
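The arithmetic above can be captured in a short helper (the function name is ours, for illustration only):

```python
def raid5_capacity(disks, size_per_disk_gb):
    """Raw and usable capacity of a RAID 5 array.

    Distributed parity consumes one disk's worth of space, so usable
    capacity is (disks - 1) times the space contributed per disk.
    """
    if disks < 3:
        raise ValueError("RAID 5 requires at least three disks")
    raw = disks * size_per_disk_gb
    usable = (disks - 1) * size_per_disk_gb
    return raw, usable

# The book's example: four disks contributing 3GB each.
print(raid5_capacity(4, 3))  # (12, 9): 12GB raw, 9GB usable after parity
```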
Levels of RAID
There are numerous levels of RAID that can be implemented in a hardware
based RAID array. Each level has its own benefits and drawbacks. The most
common levels are RAID 1 (disk mirroring) and RAID 5 (stripe set with
parity). The Server+ exam will test your knowledge of seven of these levels.
First let’s look at the levels in detail, and then we can compare them in a
chart to see the benefits and drawbacks.
Level 0 RAID 0 provides no fault tolerance at all. Data is split across
hard disks, resulting in fast data throughput but no safety. If a disk were to
fail, then the data would become inaccessible. RAID 0 is often referred
to as striping. Level 0 is one of the implementations of software level
RAID, and will be discussed later in the chapter.
Level 1 RAID 1 is called mirroring. In mirroring, two disks are used
and data is copied (mirrored) from one disk onto another. When one disk
fails, there is an identical second hard disk to take over. Most ordinary
servers will use RAID 1 for fault tolerance. Level 1 is a common imple-
mentation in servers and can be used with IDE disks as well as SCSI
hard disks.
Level 3 RAID level 3 is striping bits of data across several disks with
parity information stored on one disk. A major concern with this array
is that the parity disk is a SPOF. Should the parity disk fail, then the entire
array will halt. RAID level 3 requires at least three hard disks (including
the parity disk). There is also an increase in workload placed on the parity
disk because each time a write operation is performed this disk is accessed.
Level 4 RAID 4 stripes data as bytes across several disks with parity
information stored on one disk. Parity data is updated on each write
request, which can hamper performance. The same SPOF issue seen in
level 3 is also a concern in level 4. RAID level 4 also requires at least three
hard disks. The benefit over level 3 is that the data being written to the
disks is in larger units (bytes over bits).
Level 5 RAID level 5 is commonly referred to as striping with distributed
parity. Unlike levels 3 and 4, RAID 5 stripes the parity information
across all disks in the array, so there is no dedicated parity disk to act
as a SPOF. On the negative side, because parity must be calculated and
written on every write operation, write performance is slower.
Level 0+1 RAID 0+1 (sometimes referred to as RAID 10) is a dual array
that takes the best of level 0 and level 1. Multiple mirror sets are used,
which are then configured in a striped set (requiring a minimum of four
disks). RAID 0+1 offers high data-transfer speed with data protection.
Level 0+5 This level of RAID is composed of multiple RAID 5 sets
connected in a single array. The benefit of this complex structure is that
multiple disks could fail across several sets and still the entire array
would stay active. The cost of such a structure would be mind-boggling.
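The parity used in RAID levels 3 through 5 is, at heart, a simple XOR of the data blocks. As an illustration only (a Python sketch; no real controller works byte by byte like this), here is how parity lets an array rebuild a lost disk:

```python
# Sketch: XOR parity as used conceptually in RAID 3/4/5.
# Data blocks are byte strings of equal length; parity is their XOR.

def make_parity(blocks):
    """Compute a parity block by XOR-ing the data blocks together."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving_blocks, parity):
    """Reconstruct the one missing block from the survivors plus parity."""
    return make_parity(list(surviving_blocks) + [parity])

disks = [b"AAAA", b"BBBB", b"CCCC"]   # data striped across three disks
parity = make_parity(disks)

# Simulate losing disk 1 and rebuilding it from the other disks plus parity:
lost = disks[1]
restored = rebuild([disks[0], disks[2]], parity)
assert restored == lost
```

Because XOR is its own inverse, rebuilding a missing block is the very same operation as creating the parity in the first place.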
Now that we have a better idea of each of the main RAID levels, we can
look closely at the benefits and drawbacks of each level and compare them
(see Table 5.1). It will be important for you to know these and be able to
make distinctions between levels, not only for the purposes of the exam but
also for day-to-day hands-on activity.
RAID Disks
In most applications SCSI disks are the disks of choice in a RAID config-
uration, but IDE disks can also be used. If you remember from Chapter 4,
“Storage Devices,” IDE supports only two disks per controller, and therefore
IDE is not often used within a RAID configuration. However, with software
RAID, mirroring can be implemented on a pair of IDE disks. This will
allow for data protection. Normally SCSI disks are used. Many manufacturers
sell combination RAID and SCSI controller cards. These amazing (and
normally very costly) multi-channel devices allow you to select whether the
channels will function as RAID or SCSI controllers. Many mid-range to
high-end servers utilize these controllers.
RAID Controllers
RAID disks are just one component to consider when implementing RAID.
You will also need to consider the RAID controller if you are implementing
hardware level RAID. RAID controllers perform functions such as calculat-
ing the parity information and caching of information. RAID controllers
come with their own processor to perform parity calculations, which means
the RAID system will no longer be dependent on the CPU in the server.
In terms of caching, the controller can temporarily hold data that is waiting
to be read from or written to the disks in the array. When
choosing a RAID controller, you need to consider the type of disk subsystem
(SCSI or IDE), the type of RAID you plan to implement (not all RAID con-
trollers support every level of RAID), and the size of the on-board cache. The
size of the on-board cache will be determined by the type of data being stored
on the array and the expected workload on the controller.
RAID Cache
Depending on the workload, a RAID controller could become a bottleneck
trying to perform all of the read/write operations. Most RAID controllers
come with on-board cache to eliminate this possibility. If the RAID control-
ler receives a request that it cannot immediately perform, the request can
be temporarily placed in the cache.
Software RAID
Some operating systems allow you to configure RAID without the need for
special hardware. The operating system will come with some utility that
will allow you to configure software level RAID, which is known as software
RAID. For example, using Disk Management, a tool included with Win-
dows 2000, you can implement software RAID through the operating
system. The previous section gave you a brief introduction to the different
levels of RAID that can be implemented. The following section will describe
in more detail the features, benefits, and drawbacks of software level RAID.
Throughout the section we will pause and examine the advantages and the
disadvantages of most of the common software RAID levels. You will find
these highlighted evaluations in shaded sidebars within this section.
RAID 0, also known as disk striping, can be implemented on a server but
it does not offer any fault tolerance. So if your servers are hosting mission-
critical data, this will not be an appropriate solution.
With RAID 0, data is broken down into blocks and written across multi-
ple hard disks, which increases performance. Performance is also increased
because there is no parity overhead. However, should a disk within the array
fail, all data is lost and is only recoverable by restoring from a backup copy.
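As a rough illustration of the block-level striping just described, here is how RAID 0 distributes blocks round-robin across the members of a stripe set (a Python sketch, not how any real driver is implemented):

```python
# Sketch: RAID 0 striping -- data is split into fixed-size blocks and
# written round-robin across the disks; there is no parity and no copy.

def stripe(data, num_disks, block_size=4):
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), block_size):
        # Block i goes to disk (i mod number of disks).
        disks[(i // block_size) % num_disks] += data[i:i + block_size]
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", num_disks=2)
print(disks)   # [bytearray(b'ABCDIJKL'), bytearray(b'EFGHMNOP')]
# Lose either disk and part of every file is gone -- no fault tolerance.
```

Reads and writes can hit both disks in parallel, which is where the speed comes from; the comment at the end is why RAID 0 alone is unsuitable for critical data.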
RAID 0 Advantages
RAID 0 Disadvantages
No fault tolerance—if a single disk fails, all data is lost and must be
restored from backup.
Because RAID 0 offers no fault tolerance, it should never be used for data that
is mission critical.
RAID level 1 is also known as disk mirroring. This is one of the most
common implementations of RAID in a server environment. With disk
mirroring, two disks are required so that data from one disk can be copied
or mirrored onto a second disk. Each time a write is made, it is duplicated to
the second disk in the mirrored set. If the first disk fails, the data can be
accessed from the second disk in the mirrored set (see Figure 5.2).
[Figure 5.2: Disk 1 (C:) and Disk 2 (C:') mirrored through a single disk controller]
RAID level 1 has little impact in terms of performance. You will not
see any increase in performance when reading from the disk and you may see
a decrease in performance for disk writes because the data now has to be
written to two different disks.
In terms of fault tolerance, RAID level 1 can withstand the failure of one
disk without data loss. This implementation however cannot withstand
the loss of a second disk, so when disk failure occurs, it is important to
replace the disk as quickly as possible.
A variation of RAID 1 is disk duplexing. It is similar to disk mirroring but
provides an additional level of fault tolerance. With disk mirroring, the disks
in the array use the same disk controller. Should the disk controller fail, both
disks fail as well. With disk duplexing, each of the hard disks has a separate
controller, adding yet another level of fault tolerance.
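The write-duplication behavior described above can be sketched in a few lines (a hypothetical toy model, not a real driver):

```python
# Sketch: RAID 1 mirroring -- every write goes to both disks, and a read
# can be served from the survivor if one disk fails.

class MirrorSet:
    def __init__(self):
        self.disks = [dict(), dict()]    # two disks: block number -> data

    def write(self, block_no, data):
        for disk in self.disks:          # duplicated to both members
            if disk is not None:
                disk[block_no] = data

    def read(self, block_no):
        for disk in self.disks:         # any surviving member will do
            if disk is not None:
                return disk[block_no]
        raise IOError("both disks failed")

m = MirrorSet()
m.write(0, b"payroll")
m.disks[0] = None                        # simulate failure of disk 1
assert m.read(0) == b"payroll"           # data survives on the mirror
```

The doubled write in `write()` is also why RAID 1 can cost a little write performance, as noted above.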
RAID 1 Advantages
RAID 1 Disadvantage
Some implementations of RAID allow you to add a hot spare, which can be
used if one of the disks in the array fails.
The second most common level of RAID is RAID level 5, also known as
striping with parity. It requires a minimum of three disks.
RAID 5 Advantages
RAID 5 Disadvantages
You can implement different levels of RAID on a single server. For example
you may choose to use RAID 1 for the system and boot partition while imple-
menting RAID 5 for data.
Unlike hardware level RAID, software level RAID supports both SCSI and IDE
disks in a single array.
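The usable space of a RAID 5 set follows a simple rule: every member contributes space equal to the smallest disk, and one member's worth of space is consumed by parity. A sketch of the arithmetic (`raid5_capacity` is our own illustrative helper, not a real utility):

```python
# Sketch: usable space in a RAID 5 set. Each member contributes space
# equal to the smallest disk, and one disk's worth goes to parity.

def raid5_capacity(disk_sizes_gb):
    n = len(disk_sizes_gb)
    if n < 3:
        raise ValueError("RAID 5 needs at least three disks")
    member = min(disk_sizes_gb)          # larger disks are truncated
    total = member * n                   # total volume space
    usable = member * (n - 1)            # minus one member for parity
    return total, usable

# Two 10GB disks, one 20GB, one 40GB:
print(raid5_capacity([10, 10, 20, 40]))  # (40, 30)
```

With mismatched disks, the space above the smallest member's size is simply wasted, which is why RAID arrays are normally built from identical disks.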
Not long ago I did some pro bono consulting for a small independent school
in Massachusetts, where the administration had become concerned about
data security. Wisely perhaps, the school’s administrative network was
designed to be completely separate from the student labs, so I was spared
from dealing with dozens of wickedly clever teenage saboteurs. Instead, I
had only to deal with one cost-conscious head of school. The administrative
network was running Windows NT on a server with five physical hard disks.
The head wanted me to set up the server so that if one of the disks failed, the
data would not be lost. Oh yes…and it had to cost nothing.
Hardware Level RAID was clearly out of the question unless I could con-
vince somebody’s grandmother to donate a RAID controller. But I had the
solution. By applying software level RAID through an existing NT utility, I
implemented disk striping with parity (also known as software level
RAID 5). Software RAID relies on the operating system to control disk reads
and writes, so the processor takes a bit of a hit and some disk space is
lost to parity information; luckily, these were not major concerns on this
network. In fact, one of the math instructors reported a slight increase in
performance, probably due to faster reads from the multiple disks. I had
delivered a good level of fault tolerance: If one disk in the five-disk array
failed, data could be recovered through stored parity information.
When I explained all this to the head, she graciously thanked me and
assured me that my work was worth every penny.
Hot Plug
Hot plug is an amazing technology that allows you to add disks to a server
while it is running. This technology calls for some special components but is
well worth the added expense. Servers are built with a backplane and rail
system that allows the disks to slide in. Once fully inserted, the disks come
into contact with the backplane connectors.
Hot Spare
In RAID configurations it is often advisable to have a hot spare on hand. Hot
spares are extra hard disks (matching those in use within the server), which
can be installed when needed. By having a hot spare on hand, you can be
assured that if and when a disk fails within the server, a replacement can be
installed and integrated into the RAID configuration as quickly as possible.
Unfortunately, the hot spare idea is not always an effective use of money. There is
a possibility that you may never use the hot spare disk and it will sit in a
storage cabinet until it becomes obsolete.
With a RAID 1 configuration having a hot spare makes sense. After all, in
this implementation of RAID, you only have two disks to work with. When
one fails, you are down to only one disk—and if you recall, RAID 1 can only
recover from a single disk failure. If you then need to send out the failed disk
for servicing and wait for it to return, you suddenly are spending an extended
period of time with no data protection.
Summary
In this chapter we discussed server fault tolerance and the different
options that can be used to increase server availability. Providing redundant
components is one of the simplest ways to avoid server downtime.
To provide for complete server fault tolerance, not just fault tolerance of
individual components, servers can be set up in a cluster configuration. This
provides a high level of availability for servers as well as the applications and
services they are hosting. In a cluster configuration, two or more servers
operate as one—should one server fail, another is ready to automatically
assume its workload.
The disk subsystem is one of the most common components to cause
server failure. Providing redundancy and fault tolerance for data stored on
a server’s hard disk can be accomplished by implementing some form of
RAID, using the operating system or specialized hardware.
Exam Essentials
Recognize the three general categories of server faults. Faults that can
occur in a server environment include hardware faults, software faults,
and system-level faults.
Common redundant components. There are many ways to provide
server fault tolerance. One way is to implement common redundant
components. Due to their importance in a server environment, you should
consider implementing redundancy for the following components: Net-
work interface cards, power supplies, processors, cooling fans, and
hard disks.
Understand how clustering provides fault tolerance. To provide fault
tolerance for a server and the applications, services, and data it is hosting,
you can implement clustering technology. With clustering, two or more
servers act as one. If one server fails, another server is ready to auto-
matically assume its workload. Clustering provides fault tolerance,
scalability, and load balancing.
Understand RAID. RAID, or redundant array of inexpensive disks, is a
group of hard disks that collectively act as one storage system to provide
fault tolerance for a server’s disk subsystem. RAID can be implemented
through specialized hardware or through the operating system.
Understand the commonly used levels of RAID. The most commonly
used levels of RAID are 1 and 5. RAID level 1, also known as disk mirroring,
takes data from one disk and mirrors it onto another disk. RAID
level 5, also known as disk striping with parity, writes data across multi-
ple disks and uses parity information to re-create the missing data in the
event of disk failure.
Understand Hot Plug and Hot Spare. With the use of specialized
components, hot plug allows you to add disks to a server while it is
still running. Hot spares are extra hard disks (matching those in use
within the server), which can be installed when needed.
Key Terms
Before you take the exam, be certain you are familiar with the
following terms:
Review Questions
1. What is the minimum number of disks needed to implement RAID
level 5?
A. 2
B. 3
C. 4
D. 1
B. RAID 1+5
C. RAID 0+1
D. RAID 5
A. RAID 0
B. RAID 1
C. RAID 3
D. RAID 5
5. You create a RAID 5 array that consists of two 10GB disks, a 20GB
disk, and a 40GB disk. What is the total volume space for the array?
A. 0GB
B. 80GB
C. 40GB
D. 50GB
6. You create a RAID 5 array that consists of two 10GB disks, a 20GB
disk, and a 40GB disk. What is the total amount of space available for
storing data?
A. 40GB
B. 60GB
C. 20GB
D. 30GB
7. Which of the following levels of RAID has the lowest disk overhead?
A. RAID 0
B. RAID 1
C. RAID 3
D. RAID 5
E. RAID 10
A. Both SCSI and IDE disks can be used in the same array.
C. Clustering
D. RAID 0+1
B. Processors
C. Servers
D. Hard disks
E. RAID systems
11. You set up two servers in a cluster configuration. Each server has its
own workload—one is running a database program while the other
is running a mail service. Each server is also ready to assume the
other’s workload in the event of failure. What type of cluster config-
uration is this?
A. Active/spare
B. Active/passive
C. Passive/passive
D. Active/active
13. Which of the following solutions combines data striping across disks
with mirroring?
A. RAID 1+5
D. RAID 0+1
14. Your boss has asked you to implement hardware level RAID because
he understands that it is more reliable. He wants your opinion. What
will you tell him? (Select two.)
A. It is not necessarily more reliable but it does provide better
performance.
B. Software level RAID is less expensive but provides better perfor-
mance because the process is controlled by the network operating
system.
C. Hardware RAID will cost more because he will have to purchase
a special controller and disks.
D. Software RAID will end up costing more because a special version
of the operating system has to be purchased.
B. In nibbles
C. In bytes
D. In binary
E. In blocks
A. Yes.
B. No.
17. You want a RAID solution for your server that will give you redun-
dancy in the event the disk itself or its controller fails. Which RAID
level will you choose?
A. RAID 5
C. RAID 1
D. RAID 0
18. Your boss wants you to implement a level of RAID but he does not
want to incur any additional cost. You are running Windows 2000
on a server that has 4 physical hard disks. He understands that you
can set the RAID up in such a way that if one of those disks fails, the
data will not be lost. What are you going to implement? (Select all
that apply.)
A. Hardware level RAID
C. Disk striping
19. Your company is located in a remote area and the nearest vendor who
provides computer repair services is over 500 miles away. Which is the
best solution in a situation where a hard disk or a computer sustains
a fatal hardware failure?
D. Back up your data every night so you can still gain access to the
corporate data.
20. You wish to implement a RAID solution where data will be striped
across disks because you want the speed associated with data striping
on disk reads. However, you are concerned that if you lose multiple
disks, you will lose all the data on all disks. How can you protect your-
self against loss of multiple disks?
A. RAID 1+5
D. RAID 0+1
13. D. RAID 0+1 is a hybrid approach where an entire stripe set without
parity is actually mirrored or duplexed.
14. A, C. Hardware RAID costs more because of the special controller
and disks that need to be purchased but it provides significantly better
performance than Software RAID. In addition it will support hot
swap of disks that fail.
15. E. RAID 5 data is striped at block level across all of the disks in
the chain.
16. A. Whether a disk is hot swappable has nothing to do with its status
as a hot spare.
17. C. Disk duplexing adds a second controller to the second disk, moving
the single point of failure away from the disk subsystem to the main-
board. In RAID 1, if either disk fails, the other disk takes over.
18. B, D. This is called disk striping with parity and can be implemented
via the network operating system. Therefore, it is software level RAID.
You do not need to purchase any additional hardware or software.
19. B. Hot spares can be replaced while the server is down for a minimal
time or the hot spares can be hot plug or hot swappable types.
20. D. RAID 0 is striping without parity, so it will give you the perfor-
mance you are looking for because it doesn’t have to calculate parity.
However, if you lose one disk, you will lose all data. If you use the
hybrid RAID 0+1, the 1 means that the disks will also be mirrored or
duplexed. Therefore even if you lose two or more disks, you will still
be able to get the data back from the mirrored disk.
Notice how there are special devices at the beginning and end of the bus
wire. These are actually 50-ohm terminators. Terminators are installed at
the beginning and the end of a bus network to eliminate signal bounce. If
a terminator is not present, is not working, or is not connected properly,
network signals will reach the end of the wire and bounce back (much like
an echo). With time, the entire bandwidth of the wire will be consumed
with signal bounce, thus bringing the network down. Bus connectors,
terminators, cabling, and cable standards will be discussed in more detail
later in this chapter.
Star A star topology is based around a central device such as a hub
or switch. Every computer is connected through its own cable directly
to this central device. This is an improvement over the bus topology, as
a cable failure will bring down only the one computer directly connected
to it. Star topologies are the most commonly used topology in small- to
medium-sized business environments today. Benefits of the star topology
are ease of installation and troubleshooting. Most central connectivity
devices have lights to indicate whether a cable segment is active or down.
Upgrading to add more devices to the network can be done without shut-
ting down the entire network. Figure 6.2 is an example of a star topology.
A major concern with the star topology is the single point of failure
(SPOF). Having all network resources connected to one hub or switch
makes it a key component. Extra care should be taken in locating this
device in a safe and secure location. Another consideration is that, with
each device needing its own dedicated cable, there can be a multitude of
cables merging at the point of the central connectivity device. Cable
management becomes a focus. The cable used within a star topology
needs to be carefully routed throughout a building to avoid areas of
possible EMI (electromagnetic interference). As the number of cables
increases, this task can become increasingly difficult. Fortunately the
cost of cable used in a star topology has dropped, making it an afford-
able option. If longer cable runs need to be made to avoid possible EMI
interference, it will not be a major financial stress.
Ring The ring topology features all devices connected in a circular for-
mation. There are two different ring topologies to be aware of: logical and
physical. Logical ring topologies move information in a circular ring for-
mat but are physically a star topology. What this means is that there is a
central connectivity device and all other devices connect to this central
device thorough dedicated cables—it looks like a star topology. However,
information flows in a circular format throughout this network. Physical
ring topologies actually look like a ring. All networked devices are con-
nected in a circle. The advantage of a physical ring is that there is very
little cable in use, making installation easier. Since fiber optic networks
work on the principle of a ring, another possible advantage is speed. A
possible disadvantage of some ring topologies is that a single failure in
either the computer or in the cable can result in the entire network failing.
This is not the case in a true ring topology such as FDDI (fiber distributed
data interface). FDDI, described in more detail in the "Fiber Optics" section
below, implements a dual ring so that it remains functional should one
station die or drop off the ring.
Mesh In a mesh topology, every device is connected to multiple other
devices, giving several paths that could be taken to reach the target resource. Although this
may seem like the ideal situation, mesh topologies can be a wiring night-
mare. The quantity and complexity of wires can become overwhelming.
Mesh topologies use by far the most cable and are the most complex to
install and troubleshoot. It is rare to see a mesh topology in use today.
Figure 6.4 is an example of a mesh topology.
[Figure: a hybrid network showing servers, a backbone, a hub, and client computers]
802.3
This standard defines a bus topology, using a 50-ohm coaxial baseband
cable with a transmission speed of 10Mbps. This was the original specific-
ation of Ethernet. It used CSMA/CD (Carrier Sense Multiple Access with
Collision Detection) to put data on the cable. CSMA/CD monitors the cable
for data traffic. When it senses that there is no traffic, it will attempt to send
data packets. If a collision occurs, then it will pause for a random period of
time and then attempt to retransmit. The problem with this type of network
is that the larger the number of clients and resources on the network, the
greater the number of collisions. This leads to slower network speeds.
Several new releases to this standard have emerged, providing new
cable options, connectors, and speeds of 100Mbps—and now 1000Mbps.
These newer standards of Ethernet will be discussed in detail later in this
chapter.
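The listen-then-back-off cycle described above can be caricatured in a few lines of Python. This is a deliberately simplified toy (real CSMA/CD detects collisions during transmission and backs off in slot times using truncated binary exponential backoff):

```python
# Sketch: a toy CSMA/CD sender. Listen before sending; if the cable is
# busy, pick a random backoff and try again, up to an attempt limit.

import random

def send_frame(cable_busy, max_attempts=16):
    for attempt in range(max_attempts):
        if not cable_busy():             # carrier sense: is the wire idle?
            return attempt               # frame sent on this attempt
        # Busy: back off a random number of slot times (no real sleep here).
        backoff = random.randint(0, 2 ** min(attempt, 10) - 1)
    raise RuntimeError("excessive collisions, frame dropped")

assert send_frame(lambda: False) == 0    # idle cable: sent immediately
```

The doubling range of the random backoff is what lets many stations share one wire, and it is also why a heavily loaded segment slows down: more stations mean more collisions and longer average waits.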
802.5
Token Ring is another standard (based on the IBM PC Token Ring standard)
that has been frequently used. This standard specifies a physical star/logical
ring topology using twisted-pair wire. A special data carrier, called a token,
circulates through the ring from computer to computer picking up data
packets and delivering them to the destination. Each computer acts as a
repeater, boosting the signal of the token so it can travel to the next com-
puter. Only one computer can control the token at a time. When the data
packet and token reach their destination, the token unloads the packet
and takes a successful-reception acknowledgement to the sending computer.
If the sending computer has no more packets to transmit, the token then
becomes available to the network and the next computer waiting to trans-
mit. The advantage of this method of data transmission over the Ethernet
is that there are no collisions. Although there is only one token on the ring,
a Token Ring network can reach hundreds of systems and still perform
adequately.
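The round-robin fairness of token passing can be sketched as follows (a toy model; real Token Ring frame formats, priorities, and the active monitor are all omitted):

```python
# Sketch: token passing -- one token circulates and only the station
# holding it may transmit, so there are never collisions.

from collections import deque

def run_ring(stations, frames_to_send):
    """stations: list of names; frames_to_send: {station: [frames]}."""
    ring = deque(stations)
    delivered = []
    for _ in range(len(stations) * 4):   # let the token make a few laps
        holder = ring[0]                 # station currently holding the token
        if frames_to_send.get(holder):
            delivered.append((holder, frames_to_send[holder].pop(0)))
        ring.rotate(-1)                  # pass the token to the next station
    return delivered

out = run_ring(["A", "B", "C"], {"B": ["hello"], "C": ["world"]})
assert out == [("B", "hello"), ("C", "world")]
```

Because each station transmits only when it holds the token, throughput stays predictable even with many stations, which is the advantage over Ethernet noted above.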
802.8
Fiber optics has taken on a strong role within the computer network envi-
ronment. Traditional hybrid networks used a coaxial cable backbone for
data transmission. This proved to be limiting in terms of transmission speed
as well as signal degeneration due to outside interference. Fiber cabling is
now actively replacing the coaxial cable as a backbone. The fiber allows
for faster transmission speeds, as well as immunity to EMI (electromagnetic
interference) and RFI (radio frequency interference). There are numerous
releases of fiber cabling today, because the best choices and applications are
still being identified. Fiber optics and fiber connectors will be discussed later
in this chapter.
802.11
Wireless technology has become the latest craze over the last few years. With
transmission speed and range increasing, wireless becomes an attractive
alternative to network environments that physically change on a regular
basis. Consider, for example, a network that needs to be set up in a historical
building, where drilling holes in walls and running cables is rarely an accept-
able practice. A wireless LAN is a much more acceptable option. Without
the installation of cables, a wireless network can be installed in a matter of
minutes. Wireless can also be integrated within an existing wired network to
provide a new breed of hybrid networks. This is often seen in environments
where laptop computers are used. Due to their nomadic nature, laptops can
remain connected to the network no matter where in the building they go.
Wireless technology is continuing to develop and improve. Handheld devices
also use this technology to communicate with host computers.
Wireless technology today uses an access point that sends and receives sig-
nals from the wireless devices. This access point in turn is wired to the net-
work (usually the switch or hub). Signaling methods on a wireless network
can include infrared, laser, narrow band radio, and spread spectrum radio.
Spread spectrum radio is often preferred to the other methods because the
price is reasonable but also because it does not require line of sight like
the laser technology does. Spread spectrum radio also can travel through
some walls, providing many options for the ever-changing network. Current
transmission speeds for wireless are in the 11Mbps range but steadily
increasing. Concerns with data security are also being addressed. If you are
broadcasting your network information over a radio frequency, it can be
captured by other devices. Wireless technology is still breaking new ground
in its development, both in speed and data security. With time, it will defi-
nitely become a major contender in the LAN arena. Right now though, it is
commonly used for small portable networks that regularly change location
and/or position.
OSI Model
The Open Systems Interconnection model is a theoretical seven-layer model
designed to illustrate the flow of information through a network. This
model was designed not only for aspiring network technicians to get a
better understanding of network communications, but also for technology
manufacturers to be able to dissect the elements and processes of informa-
tion flow and development, thus assisting in project development within
their own research departments. For the Server+ Exam you will need to
know the layers of the OSI Model as well as the functions that occur at
each layer.
Application Layer The application layer (layer 7) is at the top of the OSI
Model. At this layer, file and print services operate. This layer controls
data flow and error recovery.
Presentation Layer The presentation layer is responsible for the format
of data, network security, protocol conversion, data compression,
encryption, and translation.
Session Layer The session layer is responsible for establishing, main-
taining, and terminating communication sessions. These sessions are
often called virtual conversations. The session layer identifies passwords,
logons, network monitoring, and recovery from network failures.
Transport Layer The transport layer is responsible for error-free data
frames. It controls data flow and reliable end-to-end communication.
Network Layer This layer translates logical (TCP/IP) addresses into
physical (media access control, or MAC) addresses. The network layer
also determines the best path for information to travel on the network
if multiple paths exist.
Data Link Layer The data link layer is subdivided into two sublayers:
the MAC layer and the LLC (logical link control) layer. The data link
layer arranges data chunks into frames and organizes the frames into
a data stream, marking the beginning and end.
Physical Layer The physical layer, layer 1, is at the bottom of the OSI
Model. This layer describes how data is transmitted on the network cable
(media), including digital, optical, and mechanical interfaces.
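For study purposes, the seven layers and their one-line functions can be collapsed into a lookup table (a Python sketch for drilling yourself; the summaries are paraphrased from the descriptions above):

```python
# Sketch: the seven OSI layers, top (7) to bottom (1), with the one-line
# function this chapter assigns to each.

OSI_LAYERS = {
    7: ("Application",  "file and print services"),
    6: ("Presentation", "data format, compression, encryption, translation"),
    5: ("Session",      "establish, maintain, and end communication sessions"),
    4: ("Transport",    "error-free delivery, flow control, end-to-end links"),
    3: ("Network",      "logical addressing and best-path selection"),
    2: ("Data Link",    "frames; MAC and LLC sublayers"),
    1: ("Physical",     "transmission of bits on the media"),
}

def layer_of(name):
    """Return the layer number for a layer name (case-insensitive)."""
    for number, (layer, _) in OSI_LAYERS.items():
        if layer.lower() == name.lower():
            return number
    raise KeyError(name)

assert layer_of("Transport") == 4
```

A mnemonic many candidates use, reading from layer 7 down, is "All People Seem To Need Data Processing."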
Ethernet
Of all the numerous standards and network structures, Ethernet has stood
out as the most popular network implementation. Ethernet has evolved
over the years to include several different cable types and topologies, as
previously mentioned. Through this evolution, improvements have been
made in reliability and speed.
Coaxial Based Ethernet The original Ethernet implementation was
based around the bus topology mentioned earlier. Although there were
several disadvantages to the original Ethernet (bus topology based), it was
affordable and easy to install. The cable of choice during this time was
coaxial cable. Coaxial cable consists of a center wire (usually copper)
surrounded by an inner layer of insulation, then a mesh or foil shielding,
and finally a thick outer PVC layer for protection. Figure 6.6 is an
example of coaxial cable.
There are two common forms of coaxial cable in use with Ethernet:
Thicknet or 10Base5, and Thinnet or 10Base2. Thicknet cable is normally
used for a network backbone, as seen in the hybrid example (refer back to
Figure 6.5). This cable is very difficult to work with due to its thickness.
Many installers refer to it as the frozen garden hose. Trying to bend the cable
Cable Nomenclature
In the computer realm there has never been a shortage of acronyms, abbre-
viations, and epithets. Networking is no exception. Deciphering this jargon
can sometimes be an overwhelming task. In the example of 10Base2, key
information is presented in the name.
This makes sense until you reach 10BaseT. Now there is a letter instead of
a number representing the maximum cable distance. With this standard the
letter represents twisted-pair wiring. This includes both unshielded twisted-
pair and shielded twisted-pair (both of these will be discussed in detail later in
this chapter). Next comes the 100BaseF. The F, as you might have figured out
already, stands for fiber. This implementation would therefore provide
100Mbps transfer speed over baseband signaling with fiber cable.
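The naming convention can even be decoded mechanically. A hedged sketch (the `decode` helper is our own invention, and the hundreds-of-meters rule for coax is only approximate — 10Base2, for instance, actually runs 185 meters, not a full 200):

```python
# Sketch: decoding Ethernet cable names of the form <speed>Base<medium>.
# A trailing digit is roughly the maximum coax segment length in hundreds
# of meters; a letter names the medium (T = twisted-pair, F = fiber).

import re

def decode(name):
    speed, medium = re.fullmatch(r"(\d+)Base(\w+)", name).groups()
    info = {"speed_mbps": int(speed), "signaling": "baseband"}
    if medium.isdigit():
        info["medium"] = f"coax, roughly {int(medium) * 100} m max segment"
    elif medium.startswith("T"):
        info["medium"] = "twisted-pair"
    elif medium.startswith("F"):
        info["medium"] = "fiber"
    return info

assert decode("10Base2")["speed_mbps"] == 10
assert decode("10BaseT")["medium"] == "twisted-pair"
assert decode("100BaseF")["medium"] == "fiber"
```

Once you can read the name this way, recognizing a standard's speed and media on the exam becomes automatic.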
Thinnet cable is much more flexible and is used for bus topologies and
cable runs from the backbone to the computer. Thinnet wire is classified as
RG58U cable. It can transfer data at a distance of 185 meters with a speed
of 10Mbps. Thinnet uses a BNC connector that attaches to each device on
the bus network. Figure 6.7 is an example of a BNC connector.
Coaxial Cable
BNC T Connector
50-ohm Terminator
Network Card
Twisted-Pair Ethernet
Since that original implementation, Ethernet has expanded to include new
devices, cabling, and speeds. Current Ethernet is based on twisted-pair
cabling. Twisted-pair cable has opened the door to a whole new level of
Ethernet technology. Improved transmission speed and flexibility in instal-
lation, as well as new devices, have catapulted this media to the most popular
in use today. Figure 6.9 is an example of a section of twisted-pair media.
Notice how the figure shows a solid-shaded wire and a striped wire
twisted together in a pair, and that there are four pairs of wires. The wires
are twisted together to help prevent signal interference. Signal interference,
either from another set of wires or from other devices (such as fluorescent
lighting) can affect performance on data wires. Twisted-pair cable is labeled
by category. Table 6.1 lists the common categories of twisted-pair cables and
their uses.
Category Specifications
[Figure: an RJ-45 connector, showing the plastic edge and gold connectors]
Notice how the connector is pressed down on the outer layer of the UTP
cable. This process of installing the cable into the connector and securing it
is called crimping. Crimping requires a special tool called a crimper. The end
of the UTP cable is cut, and the PVC outer layer is trimmed back. Next, the
individual wires within the UTP are carefully rearranged into the proper
order (to match the standard being used). The wires are then slipped into the
RJ-45 connector, and the crimper forces small gold connectors into the ends
of the wire and also pinches a plastic edge onto the PVC jacket to hold the
end onto the wire. It is imperative that the plastic edge contacts the PVC
jacket and not the wire. Otherwise, the twisted-pairs could be damaged or,
over time, the end could slip off.
STP connectors are identical to UTP connectors except for one added feature:
a metal shield. The exterior of the RJ-45 connector has a metal shield that
connects to the metal shielding of the STP wire. This ensures proper grounding
of the shielding throughout the entire length of the cable.
A Standard    B Standard
Green         Orange
Blue          Blue
Orange        Green
Brown         Brown
Network cable is often housed in plenum spaces, which are those areas
(usually above the ceiling or under the floor) used to circulate air. Running
cable in these areas poses a hazard in the event of fire because the cable can
give off toxic gases if it burns. Both twisted-pair and coaxial cable come in
plenum versions—these are coated with a fire-retardant material (usually
Teflon).
Most networks today are either running UTP cabling or changing from
coaxial to UTP cabling. The benefits of a UTP network, combined with the
ease of installation and troubleshooting, make it a smart choice.
Gigabit Ethernet
The latest Ethernet standard, which is becoming mainstream, is gigabit
Ethernet. These extremely fast implementations of Ethernet support data
transfer rates of 1,000Mbps. Currently, hardware supporting this standard
(including switches and network cards) is still very expensive. Implementation
of gigabit Ethernet is often limited to backbones and server-to-server
connections.
Nearly all network cable sold today meets current fire and health codes.
However, some older cables may not. Before you upgrade or restructure
your network, be sure to check with your local building codes. Cable that
will be housed in plenum areas must be plenum-rated (must not release
toxic fumes when burned). Cable that is not plenum-rated is not acceptable
to use today.
Fiber Optics
The latest technology is the use of fiber optic cabling. The benefits of fiber
cable include speed, transmission distance, and immunity to EMI and RFI.
[Figure: A fiber optic cable, showing the outer sheathing, inner sheathing, and glass cable]
Notice how the inner glass core is surrounded by several protective layers.
This is to ensure cable safety.
Fiber transmission methods fall under two categories: single mode and
multimode. Multimode uses light emitting diodes (LEDs) to create the signals.
The light produced by the LEDs contains various wavelengths. The diodes
shine light at the fiber, with some of the light wavelengths entering the fiber
and others not. The amount of light actually being used is not efficient and
therefore limits the distance that the signals can be sent. Multimode transmis-
sion fiber optics is the least expensive fiber option. Single mode transmission
relies on a laser as the light source. Laser light is one wavelength, and the
cable is matched with the laser to allow for maximum transmission. Due to
the purity of the laser light, single mode fiber implementations can achieve
transmission distances of 58 kilometers and are often used for long distance
connections.
Fiber cable is measured in microns. Fiber cable measurements are given
with two numbers separated by a slash: the first number is the diameter of
the fiber core and the second number is the outer diameter of the cladding.
The signaling method is also supplied. For example, a 62.5/125 multimode
cable contains a 62.5-micron fiber core surrounded by 125-micron cladding,
used with a multimode signaling method.
Network Devices
Besides topologies and cabling, there must be interface devices within a net-
work. These devices provide network connectivity. Each device has its own
specific use within LAN and WAN environments.
For more information on network adapter cards refer back to Chapter 5, “Fault
Tolerance and Redundancy.”
[Figure: RJ-45 ports on a connectivity device]
Bridge Bridges are interesting devices that allow you to virtually divide
your network into two separate LANs. Bridges can be used to connect
LAN segments that do not use the same media type. The benefit of imple-
menting a bridge is to decrease network traffic. When the bridge receives
a packet, it can determine which LAN segment the packet is destined for
and forward the packet to that segment and no others. It forwards packets
based on layer 2 addressing (MAC addresses). Bridges allow for specific
traffic to cross between the two virtual networks. For example, a printer
could be configured to work with both networks. Data packets addressed
to the printer will be acknowledged by the bridge and allowed to cross
over to the other network. In today’s networks, bridges are not often
used; switches and routers have become more common.
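The learning-and-forwarding behavior described above can be sketched as follows (a simplified model, not any particular bridge's implementation):

```python
# A minimal sketch of the layer-2 logic described above: the bridge
# records which segment each source MAC address was seen on, then
# forwards a frame only to the segment that holds its destination
# (flooding to all other segments when the destination is unknown).
class LearningBridge:
    def __init__(self, segments):
        self.segments = segments          # e.g. ["A", "B"]
        self.mac_table = {}               # MAC address -> segment

    def receive(self, src_mac, dst_mac, arrived_on):
        self.mac_table[src_mac] = arrived_on       # learn the source
        dest = self.mac_table.get(dst_mac)
        if dest == arrived_on:
            return []                              # same segment: filter
        if dest is None:                           # unknown: flood
            return [s for s in self.segments if s != arrived_on]
        return [dest]                              # known: forward

bridge = LearningBridge(["A", "B"])
print(bridge.receive("aa:aa", "bb:bb", "A"))  # unknown dest: flood to ['B']
print(bridge.receive("bb:bb", "aa:aa", "B"))  # reply: forward to ['A']
print(bridge.receive("aa:aa", "bb:bb", "A"))  # dest now known: ['B']
```

Because forwarding decisions use only MAC addresses, the bridge needs no knowledge of the higher-layer protocols in use, which is what the text means by layer 2 addressing.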
Switch A switch is often referred to as an intelligent hub. The benefit of
a switch over a hub is that a switch will read the information coming
inbound and, based on the address located in the data header, the switch
will send the information out on the receiving addressed port. This elim-
inates the network-wide propagation that occurs with a hub. Efficiency
with a switch is often the justification for the extra expense over a hub.
Switches are the connectivity device of choice today in both small and
large networks. Some high-end switches allow for monitoring data traffic
as well as creating VLANs; these are called layer 3 switches and are used
in high-traffic large networks.
Router Routers are intelligent devices that, if given multiple choices,
will select the best path for sending data between networks. WANs often
use routers to connect between locations. Routers can be complicated
devices to set up and maintain.
Routers are responsible for receiving packets and determining the best path
to reach the destination host using layer 3 addressing. Therefore, routers
can only be implemented if the LAN protocol (e.g., IP or IPX) supports
routing. The router uses the logical addressing information (such as the
TCP/IP address) within the packet header to determine where to route the
packet.
Routers maintain routing tables that contain information about destina-
tion networks. When a packet is received, the router uses the information
in the routing table to determine the best path to send the packet to reach
the destination network. The packet may be forwarded directly to the
destination network or to another router.
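The routing-table lookup described above can be sketched in a few lines, assuming the standard longest-prefix-match rule (the interface and router names here are hypothetical):

```python
import ipaddress

# A toy routing-table lookup: of all routes whose network contains
# the destination address, the most specific (longest prefix) wins.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-2",
    ipaddress.ip_network("10.1.0.0/16"): "eth0 (directly attached)",
    ipaddress.ip_network("0.0.0.0/0"):   "default gateway",
}

def route(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(route("10.1.2.3"))     # eth0 (directly attached)
print(route("10.9.9.9"))     # router-2
print(route("192.168.5.1"))  # default gateway
```

The last lookup shows why the packet "may be forwarded directly to the destination network or to another router": when no specific route matches, the default route hands the packet to the next router in the path.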
Gateway A gateway allows communication between different network
architectures and network environments. It will translate between proto-
cols and allow for dissimilar networks to be connected together and to
communicate.
Network Installation
Now that we have a clear understanding of each network component, we
can start putting it all together. There is a logical process of steps from the
planning stage to the finished, up-and-running network.
To begin with, you have to determine which network is going to best
suit your needs. How large is your network? What are the software and
hardware uses of the network? Where will the computers be located? What
is the maximum distance that your network will need to stretch to? Will the
network grow with time? What are the future plans for the business in terms
of software and hardware requirements?
Most small- to medium-sized networks today will install an Ethernet net-
work based on a star topology with a switch running at 100Mbps. This
network will use Category 5 UTP cable and might or might not have a
patch panel attached to a rack. Begin with deciding on the server’s location.
This might be a dedicated room, closet space, someone’s office in the
back corner, or other possible locations. As you will learn in Chapter 13,
“Managing and Securing the Server Environment,” there are numerous
environmental variables to take into consideration when selecting the loca-
tion of your server and networking equipment. Normally the connectivity
devices and server are located within the same setting. The ideal environment
would be within a rack, both for security and to provide a stable mounting
surface.
The first step is the wiring. If you are replacing an existing network, you
can often follow the same wire paths that were in use before. However, if
you are installing a new network, then care must be taken when running the
main cable lengths to ensure that they are routed away from sources of EMI
or RFI. Care when crossing electrical wires must also be taken. It is advisable
to cross electrical wires at 90-degree angles rather than running the UTP
parallel to the electrical wire. When running your network wires, be sure to
label each wire with its source location. Remember that at the switch end
there will be a wire for each computer. If you do not label each wire with its
source, you will have no idea which wire belongs to which location. Special
numbered stickers and tape are available, but they can easily fall off. A
permanent felt marker ensures that your labeling system will not become lost
over time. In small networks that don’t change, you can label each wire with
the user’s name. However, in a larger network or an office that changes staff
regularly, this system is not advisable—for obvious reasons.
Once all wire has been run, the ends can be crimped or attached to the
keystone connectors. It is highly advisable to attach keystone connectors and
wall plates to the UTP at the user’s computer end. This not only creates
a much cleaner installation, but also protects the connection: The cable
between the wall and the back of the user’s computer can, and often does,
take abuse. Becoming tangled around feet under the desk, tension from the
computer being moved, and run-ins with cleaning staff and vacuums are just
a few of the possible dangers that this cable can encounter.
By installing the wall plate and keystone connector, you then use a short
patch cable from the wall plate to the back of the computer. If the cable
becomes damaged, it can be readily replaced with a spare cable, with no
stress or damage to the main cable run. At this time, the UTP can also be
attached to the back of the patch panel. These cables, just like at the keystone
connector side, are attached with a punch down tool. This tool is really a thin
blade that forces the strands of cable, by their color codes, into small gold
connectors that provide the connectivity. Once completed, the labels for the
wires can be transferred to the front of the patch panel and the patch panel
can be installed into the rack or on a wall to ensure that there is no further
stress on the main cable runs.
Now that all of the cables have been run, the next step is to test the cables.
Cable testers come in a variety of forms. The simplest ones test for basic con-
nectivity. A transmitter connects at one side of the UTP through the RJ-45
connector and a receiver connects at the other end. The transmitters send
electrical pulses down each strand of cable within the UTP and the receiver
lights for each wire. If there is a fault with crimping, you can determine
which wire is failing. The most expensive testers will provide the same
function but also test for signal degradation, speed of transfer, and numer-
ous other elements; these testers will also provide a printout of the test
results. Since these testers are expensive, they are often rented.
When you are testing network cables, ensure that there are no devices
attached to the other end of the cable. The electrical impulses sent by the
testers can transmit enough charge to damage your expensive network
components.
Once the cables have been installed and tested, the connectivity device can
be installed. Depending on your budget and network size, you will have to
choose either a hub or a switch. As previously mentioned, hubs will propa-
gate information through every wire on the network and can lead to slow
network performance. If your network has more than 12 users and tends
to send and receive a large amount of network traffic, then you will want to
purchase a switch. Whether you buy a hub or a switch, you must select
carefully. Many hubs and switches allow for stacking. Stacking is the ability
to link devices together. So, should your network grow beyond the number
of available ports on your switch or hub, you can simply buy another hub or
switch and link it to the first one. However, when purchasing the initial hub
or switch, you should have a clear idea of the number of current and future
computers that will be connecting to the network. Then purchase a hub or
switch that will meet this need.
Switches and hubs commonly come with 4, 8, 16, or 24 ports. If you need
13 ports, then you will have to purchase a 16-port switch. Once you have
decided on a switch or hub that will meet your needs, it can be installed into
the network. If you are using a rack system, the switch or hub will be
installed below the patch panel. This will ensure that the wires from the
patch panel to the switch will not interfere with other devices installed in
the rack. When installing any components in the rack, care should be taken
with electrostatic discharge. ESD can cause damage to components even if
you don’t open the case. Use an ESD wrist strap or other suitable ground
methods to ensure that you are not going to cause a potential fault in your
new components.
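The sizing advice above amounts to a simple rule, sketched here (the port counts are the common sizes named in the text):

```python
# Pick the smallest commonly sold port count that covers both
# current computers and planned growth; beyond the largest unit,
# stacking additional switches is the fallback described above.
STANDARD_PORT_COUNTS = [4, 8, 16, 24]

def ports_needed(current, planned_growth):
    required = current + planned_growth
    for size in STANDARD_PORT_COUNTS:
        if size >= required:
            return size
    return "stack multiple 24-port units"

print(ports_needed(13, 0))   # 16 (the example from the text)
print(ports_needed(10, 8))   # 24
print(ports_needed(20, 10))  # stack multiple 24-port units
```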
Finally the network cards can be installed and configured. If your com-
puter does not have a network card installed, the first step is to perform a
safe shutdown of the server and take the cover off the computer and examine
the available expansion busses. Hopefully you have a free PCI slot and can
then purchase a PCI network card. Since you are installing a 100Mbps net-
work, the card would have to support the 100Mbps transfer speed. While
wearing an ESD wrist strap, you can then install the PCI network card. If you
are using a PCI card and Windows, the card should be Plug and Play so that
the operating system will identify the network card when you turn the com-
puter back on. Insert the manufacturer-supplied disk when prompted and
the installation of the network card will be completed.
Connect patch cables between the computers and the wall plates, and
between the patch panel and switch. (Remember to test your patch cables
before installing them.) Connect patch cables to the server's network cards
and the switch.
The network hardware is now installed. Next comes the operating system
and software configuration. This will be covered in detail in Chapter 7,
“Network Operating Systems,” and Chapter 8, “TCP/IP.”
Summary
This chapter began with an exploration of major network types. The
most common types of networks seen today include the LAN, WAN, SAN,
VLAN, VPN, and WLAN. Network topologies are the physical layouts that
a network can have. This includes the bus, star, ring, mesh, and hybrid.
Networks are governed by standards to ensure compatibility between
vendors. IEEE created the 802 standards, dividing networking into 17
different areas. Of these, the most commonly implemented are 802.3
Ethernet, 802.5 Token Ring, 802.8 Fiber Optics, and 802.11 Wireless.
The OSI Model is a layered approach to understanding the flow of infor-
mation through a network. The layers from top to bottom are Application,
Presentation, Session, Transport, Network, Data Link, and Physical.
Of the various networks available, Ethernet is the most common
implementation in use today. Initially a bus topology using coaxial cable
and transmitting at 10Mbps, Ethernet has grown to a star topology using
twisted-pair cabling at 100Mbps and beyond.
Exam Essentials
Know the different types of networks. Be able to identify the differ-
ences between LAN, WAN, and MAN networks.
Know the different network topologies. Identify and understand
the benefits and drawbacks of the bus, star, ring, mesh, and hybrid
topologies.
Be able to identify the IEEE 802 standards. Pay special attention to
802.3, 802.5, 802.8, and 802.11. Know the details of each of these four
levels. Be familiar with identifying all the 802 standards by function.
Know the layers of the OSI Model. Be able to list, in the proper order,
the layers and functions of the OSI Model.
Know the Ethernet standards. Make sure to understand the differences
in Ethernet evolution from coaxial bus topologies to UTP star topologies.
Understand how a bus network works. Be familiar with Thinnet,
Thicknet, backbone cables, terminators, tee connectors, and the bus
layout.
Understand how a star network works. Be familiar with UTP, STP,
central connectivity devices, crimping, and patch cables.
Key Terms
Before you take the exam, be certain you are familiar with the following
terms:
Review Questions
1. What are networks?
A. Monitors
B. Files
C. Printers
D. Internet connections
A. LAN
B. WAN
C. MAN
D. VLAN
A. Star
B. Ring
C. Bus
D. Mesh
A. 802.5
B. 802.11
C. 802.3
D. 802.7
B. 802.3
C. 802.11
D. 802.7
A. 802.3
B. 802.5
C. 802.7
D. 802.11
10. Which layer of the OSI Model is responsible for logical (TCP/IP)
addressing?
A. Network
B. Transport
C. Physical
D. Data link
11. Which layer of the OSI Model is responsible for describing how data
is sent on the network media?
A. Transport
B. Physical
C. Network
D. Session
B. 10BaseF
C. 10BaseT
D. 10Base2
A. 100 meters
B. 185 meters
C. 200 meters
D. 500 meters
A. Two
B. Four
C. Six
D. Eight
A. 10Mbps
B. 10,000Mbps
C. 1,000Mbps
D. 16Mbps
A. 100 meters
B. 185 meters
C. 200 meters
D. 500 meters
A. Laser
B. LED
C. Natural light
D. Impulse
18. What is the light source for single mode fiber optics?
A. Laser
B. LED
C. Natural light
D. Impulse
C. Switches
D. Routers
NOS Options
When getting ready to purchase any software for your server, a
number of considerations come into play. When purchasing the NOS,
these choices become even more complex.
In choosing your network OS, you are also determining which applica-
tions you will be able to purchase, how your network will be managed, and
In this book, we will present you with the data needed to answer the factual
elements of the exam, and we will also give you the mental framework from
within which you can make the logical decisions that are such a big part of the
test. As you answer the questions, though, remember that the question is not
asking you how you think something should be done—it is asking how you
think the majority of experts would recommend it be done.
Application Compatibility
Application compatibility concerns are generally pretty straightforward.
Usually a particular software package is either written to run on a particular
OS, or it isn’t. Even so, there are times when the software is only compatible
with certain versions of a NOS, or where the application is more full-
featured on one platform than on another.
The simple fact is that the easy way out here is to pick Windows 2000
Server as your NOS, because just about anything you could need is written
for the Windows OS family. That doesn’t make Windows the automatic
choice for compatibility, though. At times, security, stability, or other key
elements of an operating system influence your choice. For example, e-mail
servers often use a Unix flavor for their operating system. With the mul-
titude of viruses on the Internet, and e-mail being the most common
transport method for a virus, Unix operating systems provide the best
tolerance to virus infections.
Other applications are designed to operate within a specific operating
system environment. This is in part due to resource management, but more
commonly due to shared files such as drivers. The best way to eliminate
possible problems is to clearly research the programs and software that you
are installing to ensure that they will support installation on the operating
system you are using as well as the version. Each NOS manufacturer has
released several versions and update patches. Each change will result in an
impact on the software that you wish to run on the server. This also raises a
concern when you decide to perform an operating system upgrade or patch.
Consideration of the applications installed must be taken.
Hardware Requirements
Not all operating systems are the same—most obviously in their installation
and user interface, but also in their hardware requirements. Hardware
requirements include both physical hardware installed within the server as
well as available resources for operation (such as RAM and virtual memory).
Selecting an operating system that will best meet the hardware requirements
(or vice versa) is an important planning step.
Novell NetWare
The latest Novell NetWare release is NetWare 6. The following hardware
requirements are the minimum installation and operation requirements for
NetWare 6:
Intel Pentium II or higher processor
Windows NT
Windows NT preceded Windows 2000. Hardware requirements between
the two operating systems remained similar, with Windows 2000 requiring
a bit more in the resource area. The new .NET server (Windows Whistler),
due to be released later this year, will continue the same trend. The
following requirements should continue to serve:
Intel 80486 processor or higher
VGA display
125MB free hard drive space
16MB RAM
At least one network card
CD-ROM
Keyboard
Mouse
Unix/Linux
With the multitude of flavors of Unix available, it is impossible to give a clear
list of resources that must be met to have the Unix-based operating system
function on a server. With the source code for Unix open to public alteration
and tweaking, the variations of this operating system increase and grow on
a regular basis. The following is a general hardware requirement list for
Mandrake Linux (one of the many Linux distributions).
Intel Pentium processor
64MB RAM
500MB hard drive space
CD-ROM
VGA
OS/2 Warp
OS/2 Warp is a server operating system designed by IBM for their server
line. It is a scalable OS with support for numerous business solution prod-
ucts. In the words of IBM, “IBM solutions target today’s heterogeneous,
open computing environment.” This server solution is often installed and
configured upon arrival from IBM and requires little further configuration.
Features
Features of an operating system are often a major element in making your
decision to purchase one OS over another. In the past, the features were
dramatically different between the operating systems. Windows NT came
with a graphic user interface, while older versions of Unix and NetWare 3
did not—they were command-line-based, much like old DOS. This feature in
itself led to increased sales for Microsoft because the GUI eased the daily
chores of installing users, printers, mapped paths, and security.
Other features to consider include ease of installation and setup. Will
there be a steep learning curve involved in implementing this server opera-
ting system? What are the desired uses for the server on the network? We
previously mentioned the e-mail server and e-mail viruses. If the server is
to be an e-mail server, then the use of a Microsoft server operating system,
even with virus protection, is still a risk.
Another area of consideration is interoperability with the rest of the
operating systems within the network. What are the clients using, both
as hardware/operating systems and also software? Will it be compatible
with the new server operating system? Will there need to be adjustments
made to the clients’ computers? Will this server operate in an environment
with other servers using different NOSs? What network protocols will be
used as communication between devices on the network as a result of the
server operating system selected?
Windows historically has relied on NetBEUI as its protocol of choice
(up until Windows 2000), Novell has favored IPX/SPX (until NetWare 5),
and Unix prefers TCP/IP. Today all three manufacturers have agreed upon
using TCP/IP, but you will probably work in a heterogeneous environment
because it is a rare company that has the fiscal resources to purchase the
latest hardware and software as soon as it comes out. This leaves you often-
times in an environment with dated Novell or Microsoft servers, or both
operating together.
Key features of each operating system will be discussed within the next
few sections. The thing to remember is that an assessment of your network
needs will reflect heavily on the operating system and version that you
decide to use.
Costs
Cost is always a concern. As much as we would like to have the best of every-
thing in the world, the reality is that few of us can afford to. Server
operating systems range in price dramatically once the issue of licensing
is brought up.
Unix/Linux is by far the least expensive operating system. In many circles
it is free for distribution or a nominal fee is charged. Due to its open archi-
tecture, it can then be reengineered to best meet your business needs. The
concern with Unix, at times, has been support for third-party drivers. You
might not be able to locate a driver for your video card or network card, for
example. Driver manufacturers, as of late, have seen the demands and trends
toward using Unix as a common operating system and have been busy
playing catchup, designing Unix-specific drivers.
Windows servers come in several forms. Each was designed to meet a
specific business need. Windows NT Server could be purchased as well as
Windows NT BackOffice Server 4.5; the latter provided an entire suite of
software programs that were designed to control everything from Internet
proxy to database sharing. Windows 2000 also offers a few different
options, including Windows 2000 Server and Windows 2000 Advanced
Server. Windows 2000 has been rather expensive; current price for Win-
dows 2000 Server and a five-client license is $1,344.99. The expense
increases dramatically with licensing additional users. Not only do you
have to pay for the operating system but a license fee must also be paid
for each computer that will be connecting to the server.
Before purchasing client access licenses, you must determine which
licensing method your server will use. The two choices are Per Seat and
Per Server. You can only use one licensing method in the server. If you
choose Per Server, you can switch to Per Seat at some point in the future,
but once you are licensed Per Seat, the decision is permanent. Microsoft
recommends that if you are unsure of which mode to choose, you should
choose Per Server, since that allows you the flexibility to change modes
later on.
Licensing Per Seat means that it is the client end of the connection that
holds the license, and that license can be used to connect to any server on the
network. Licensing Per Server means that it is the server end of the connec-
tion that holds the license, and there must be an available license in order
for each client to connect. There are two factors that tell you which is the
cheapest option for your server:
Number of servers on the network
Number of concurrent connections that clients will make to the
server(s)
If you have only one server on your network, it will most likely be best to
choose Per Server, because you will only have to purchase enough licenses
to equal the number of concurrent client connections. In this scenario, you
could potentially have many fewer licenses than client PCs. This is particu-
larly true if the clients connect to the server, use the server resources, and
quickly disconnect. If client PCs will maintain a connection for a long time,
then the number of licenses will probably equal the number of clients, which
is the same cost as licensing Per Seat in the case of having only one server. An
example where this strategy would work well would be in a remote access
server. If clients are connecting into the server remotely to check their mail
(for example) and then disconnecting, there is no need for a license for every
computer: Per Server is the best option.
If you have more than one server on your network, it will most likely be
best to choose Per Seat. The only way it wouldn’t be the best way is in the
peculiar case of having very few concurrent client connections. When licens-
ing Per Server, each server contains a pool of licenses, so that if one server
has 25 licenses, and another server has 10 licenses, you can only legally
connect 25 clients to the first server and 10 to the second. In this same
situation, if you licensed Per Seat, you could have 35 clients connect to
either server or both servers simultaneously.
In the real world, there are very few situations that will warrant Per Server
licensing. Only if you have a single server or very brief server connections
will Per Server be cost effective. In contrast, Per Seat licensing allows for
client PCs to connect to as many servers as are available on the network, with
no thought for other clients’ concurrent connections.
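The 25-license and 10-license scenario above works out as follows (license counts only; an illustrative sketch, not Microsoft's licensing tooling):

```python
# Per Server: each server holds its own pool of licenses, one per
# concurrent connection to THAT server. Per Seat: one license per
# client PC, valid against every server on the network.
def per_server_licenses(concurrent_per_server):
    # e.g. {"srv1": 25, "srv2": 10} -> 35 licenses total, but each
    # pool is locked to its own server
    return sum(concurrent_per_server.values())

def per_seat_licenses(client_count):
    return client_count

# The two-server scenario from the text: 25 + 10 concurrent users,
# 35 client PCs in total.
print(per_server_licenses({"srv1": 25, "srv2": 10}))  # 35
print(per_seat_licenses(35))                          # 35
# Same license count, but Per Seat lets all 35 clients reach both
# servers, while Per Server caps each server at its own pool.
```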
Novell NetWare
Originally known as Share Net, Novell’s NetWare NOS has been evolving
for nearly 20 years, making it the dean of the three NOS platforms we will
be discussing.
Share Net debuted in 1983, and became NetWare shortly after. Early ver-
sions of NetWare were extremely successful in competing with Microsoft’s
LAN Manager and Banyan Vines. Both of these products have followed the
evolve or die motto, with LAN Manager evolving into Windows NT/2000,
and Banyan Vines dying.
NetWare has a bit of both of these possibilities in its past. In the mid
1990s it seemed as though NetWare was everywhere (most estimates showed
that over 80 percent of all LANs ran on NetWare in 1995). When Novell
brought out NetWare 4.x with a distributed network directory based on
the X.500 standard, they appeared certain to crush all other competitors.
Their NOS was better technologically than any of their competitors’, they
had terrific market share already, and their customers were dedicated to the
company and the product.
Oddly, that was about the time the wheels came off.
Four things seem to have occurred, more or less at the same time, that
caused Novell serious problems:
1. The Web started reaching corporate networks around 1995.
In short, here is the reading of one analyst about how this affected the
company: Even as the most important technological revolution of the decade
was going on around them, Novell spent its resources developing a word pro-
cessor, and neglected to market their superior NOS. Microsoft, meanwhile,
took advantage of the fact that faster hardware allowed their GUI servers
to compete (somewhat) with Novell, and hit the advertising/marketing trail
hard for NT. Microsoft quickly standardized on TCP/IP, and rode the
wave of the Internet. NetWare did not switch to all TCP/IP for another
four-plus years.
Because of this, much of Novell’s market share has evaporated, and the
company has a number of bridges to repair. Even so, there are significant
reasons for optimism. They still have a great NOS, they have dumped
WordPerfect, and they have a new management team.
Here are some of the specs you will want to know about NetWare for
the exam. These are generally written with an eye toward giving you “just
enough NetWare.” The same practice will be followed for the other OSs.
If you want detailed knowledge, check out the links and books mentioned.
NetWare 3.x
NetWare 3.x included NetWare 3.11 and 3.12, based on the product known
as NetWare 386, introduced alongside Intel's 386 chip. NetWare 3 supported
multiple cross-platform clients (Microsoft,
Apple) and had minimal hardware requirements (4MB RAM, 74MB hard
disk space); this allowed for NetWare to be installed in low-cost environ-
ments. NetWare 3.x used a database called the Bindery to maintain groups’
and users’ accounts. The three major utility programs used (through a
command line interface) to control a NetWare 3.x server were Syscon,
PCONSOLE, and FILER.
Syscon was used for user administration of the Bindery.
PCONSOLE was used for printer setup.
FILER was used for file operations.
NetWare 4.x
NetWare version 4.x was released in 1994 and offered a new centralized
administration service called NDS (Novell Directory Service). NDS not only
eliminated the need for the three separate programs of NetWare 3 (Syscon,
PCONSOLE, and FILER) but also allowed for administration of numerous
servers through one console. Prior to version 4, changes had to be made indi-
vidually on each server in the network. This was both a time-consuming and
cumbersome task.
The first release of NetWare 4.0 was fairly buggy and soon was replaced
with 4.1 and then 4.2, which was stable. Version 4.2 was released as a
stepping-stone toward version 5. It is worth noting that during the release of
version 4 Novell changed the name of the product to IntranetWare. Some
believe that this was a marketing ploy to take advantage of the Internet craze
that was forming at this time. The name was subsequently changed back
to NetWare in version 5.
NetWare 5.x
NetWare version 5 made a radical change in network communication for
Novell. Up to this point the Novell network protocol of choice was IPX/
SPX. With the release of version 5 Novell switched to TCP/IP as the protocol
of choice. This change was in part due to the sweeping support for TCP/IP
driven by the constantly expanding Internet. The protocol of the Internet
is TCP/IP. NetWare 5 also included support for a multiprocessor kernel.
Previous versions of NetWare supported multiprocessors, but with the
addition of another NLM. Other added features included a five-user
license of Oracle 8 (a relational database) and the inclusion of
Z.E.N.works, which provided for management on the workstation side.
NetWare 6.x
A major improvement in the just-released NetWare version 6 is eDirectory,
an upgrade to the NDS structure introduced in NetWare 4.x. Although
eDirectory can be installed, maintained, and supported on non-NetWare
platforms, including Windows NT/2000 and Linux/Unix servers, NDS
runs only on NetWare servers.
NetWare Architecture
All operating systems are modular to a degree. Key components make up a
core, which is the operating system. On this core other modules or programs
are added. Novell is a classic example of this design idea in practice. The
major component of NetWare is the kernel or core OS. Built on the kernel
are NLMs (NetWare Loadable Modules). By creating a structure such as
this, disk space can be conserved by selecting which components to load and
which not to load. There are four key types of NLMs: disk drivers, LAN drivers,
name space modules, and utility NLMs.
Disk drivers provide access to disks and disk resources. NetWare version 3
used a .DSK extension to allow access to IDE drives. With the release of
newer versions, this extension changed to .HAM or .CDM.
LAN drivers interface between the NetWare kernel and the network
card. This is an obviously important area, because the server must have a
connection to the network. These drivers typically have a .LAN extension.
Name space modules control how files look and are stored on the server.
By default NetWare stores files using the old DOS naming convention.
This is a filename of up to eight characters, a period, and then a three-letter
extension. Because different client operating systems accessing the
server will use different file storage and naming conventions, the name space
modules act as a buffer to mediate between the different files. The extension
for a name space module is .NAM.
Utility NLMs basically contain all the other items that do not fall into any
of the previously mentioned categories. More than 70 percent of NLMs
fall into the utility category, including print drivers (Novell Distributed
Print Services).
Each module is selectively loaded and linked to the NetWare kernel.
This creates the NetWare operating system.
NetWare Administration
Administration for a NetWare server is actually done remotely. A separate
client utility called Novell’s NetWare Client for Windows 95/98 must be
installed on a client machine in order to control a NetWare server. This
allows the server to be physically locked up and secure. In NetWare version 3,
access is gained to the Bindery through a client machine running the Net-
Ware client software. You must then log into the server as an Admin or
a user with administrator rights. From this point you can create users
and manage the server. In NetWare versions 4, 5, and 6, you use NDS
rather than the Bindery. Starting in version 5, a Java applet can also be
used rather than the command-line utility or menu-based monitor.
NetWare Interoperability
Novell NetWare supports Windows 95/98, Windows NT, Mac OS, VMS,
OS/400, Unix, and OS/2 clients. Prior to NetWare version 5, Novell ran
IPX/SPX natively. Starting with version 5, TCP/IP is the protocol of choice.
Windows
In comparison to the other major vendors, Microsoft is the new kid on the
block. And in being the new kid, Microsoft has taken a lot of heat over
the years for glitches and for the number of software patches
released to fix known problems.
Over the recent years Microsoft has released a multitude of operating
systems. The majority have been desktop OSs, but there have been
network operating system releases as well. Windows NT was the first
Microsoft network operating system. It saw major changes and then was
replaced by Windows 2000. Repair of known issues as well as advanced
features and support for modern hardware are some of the key benefits
of Windows 2000.
Windows NT
Microsoft first attempted to create a network operating system with the
release of Windows NT (New Technology). Version 3.1 was the first release
to the public and it came out in 1993. Much like the original release of Win-
dows 95, NT 3.1 was quite buggy and problematic. In the world of servers
that is a cardinal sin. For this reason NT 3.1 was really not taken very seri-
ously. NT 3.51 was released about a year later. Better stability, support for
new hardware, and a familiar interface (built with the Windows 3.1 GUI)
placed Microsoft in the network operating system arena that was previously
dominated by Novell and Unix. With time Microsoft released Windows 95
as well as NT 4, which was based around the Windows 95 GUI. The clever-
ness of Microsoft marketing shone through again. Using the GUI from an
operating system that the public was familiar with ensured less of a learning
curve in using NT 4 as well as eliminating some intimidation.
Stability issues and concerns from version 3 had been dealt with before
the release of Windows NT 4 in 1996, and NT 4 became increasingly pop-
ular. The gamble of creating a network operating system with a familiar
interface paid off, and NT 4 vaulted Windows past Novell both in network
operating system sales and in the number of third-party programs written for the OS.
Added components in version 4 included IIS (Internet Information Server), a
web server, and the Internet Explorer web browser. Windows NT 4 became
widely accepted as an OS for an enterprise server but not for a backbone
server—possibly due to fears dating from version 3's instability, or possibly
due to the newness of the operating system compared to Novell or Unix.
Windows 2000
Windows 2000 Server was developed to address the fears in using Win-
dows as a true server operating system in large environments. With the
release of Windows 2000, a change in protocol use also occurred. Prior
to Windows 2000, NetBEUI was the native protocol of choice. Win-
dows 2000, much like NetWare, switched to TCP/IP as its main protocol
for better Internet support. Previous versions of Windows
Server used NTDS (NT Directory Services) to control user account and
group security. Windows 2000 switched to AD (Active Directory). This
new service was more in line with Novell and followed the X.500 standards,
using a hierarchical approach to naming conventions. Windows 2000 also
improved on network and system performance monitoring by creating a
management utility called Administrative Tools. Figure 7.1 is a screen shot
of Administrative Tools. Administration is now done centrally through the
MMC (Microsoft Management Console) in Windows 2000.
Windows Interoperability
A server never operates in an isolated environment. Clients and other serv-
ers connecting to the server come from various sources and platforms such
as versions of Windows, NetWare, Unix, and Apple. Windows NT offers
client and print services for Novell, Apple, and Unix. The Novell services
are called GSNW (Gateway Services for NetWare), CSNW (Client Services
for NetWare), and FPNW (File and Print Services for NetWare). Macintosh
clients can also access a Windows server, but client software will have to be
installed. All Microsoft operating systems are also natively supported
(including DOS and Windows 3, 95, 98, and Me).
Windows Administration
Windows NT is administered through User Manager for Domains, located
in the Administrative Tools folder. Windows 2000 replaces it with the
Active Directory Users and Computers snap-in, run from the MMC. Through
these utilities you can create both users and groups for the local computer
or for the network, and each gives a clear view of the users on the network.
Unix/Linux
It is claimed that the first version of Unix was invented in 1969 at Bell Labs.
Regardless of the date, Unix is definitely the oldest network operating system
in use today. Even though there has been constant change in the appearance
and even function with each new flavor, the core of the operating system
remains the same.
Unix Architecture
Unix uses a 32-bit command-line-based core capable of supporting a GUI
(often called X Window). Unix natively supports TCP/IP as its primary
protocol. The Server+ Exam does not cover Unix operating systems very
much, but be aware of some of the essential flavors of Unix as well as
their benefits.
Linux
Linux is a Unix flavor that has been receiving a lot of attention over the
last few years. Linux version 1 was released in 1994 and has been updated
constantly since that point. Linux prefers to be run on an Intel platform, but
successful attempts have been made with RISC processors as well as on a
Macintosh.
Two releases of Linux should be mentioned: Red Hat Linux and Slack-
ware. Red Hat Linux is a portable version that will run on Intel, Alpha,
and SPARC processors. Slackware was designed for the Intel platform only and
will support up to 16 processors, as well as Ethernet networking.
SCO Unix
The Santa Cruz Operation (SCO) makes OpenServer and UnixWare.
OpenServer is robust and scalable and is used with Intel equipment.
UnixWare was obtained by SCO from Novell in 1997. UnixWare provides
interoperability with Novell-based networks as well as being easy to
administer and install.
Sun Solaris
Sun Microsystems created its own Unix called Sun Solaris. It was designed to
run on the SPARC platform rather than Intel. Sun sells the server
hardware and the operating system together, a combination designed for Internet servers.
Unix Administration
Unix administration is most commonly done through a GUI called X Win-
dow or through a command line utility called a shell. There are three major
shells used: Bourne, C, and BASH. New accounts are made through modi-
fication of the /etc/passwd file. Normally administration is done through
the X Window utility, where the User Configurator is used.
Installing a NOS
Installing a network operating system can be a confusing process.
If you have ever formatted and reinstalled a desktop PC you will remember
that there are many processes and steps that have to be followed in a
specific order. Drivers and disks that support items such as video cards,
network cards, and SCSI cards will have to be installed with the assistance
of manufacturer-supplied drivers. At times, patch files and firmware will
have to be used as well. (Refer to Chapter 11, “Managing and Securing the
Server Environment,” for more information on patch files.)
More on POST and error messages will be covered in Chapter 12, “Perfor-
mance and Hardware Monitoring,” and Chapter 15, “Disaster Recovery.”
For more information see Chapter 10, “Hardware Updates,” and Chapter 11,
“Software Updates.”
Client Access
Once you are sure that the operating system is running stably with the hard-
ware as well as network software, access can be granted to the clients slowly.
You may have wondered why, a few steps earlier, I mentioned installing net-
work software and partially testing it before there were clients accessing it to
ensure that it was running fully. Granting client access is usually the last step
when installing or upgrading a server. Until you are sure that the server is
fully functional and capable of handling the stress of client requests, you
should not allow access to it. Not only will you have to deal with an improp-
erly running server if you grant access too early, but everyone who has
connected to it will be calling you and letting you know that the server is
down. Clients may also take the liberty of starting to store files or use the
applications on the server, and if you need to restart the server for some
reason, you are going to get caught in the middle of some agitated personnel.
It is therefore best to ensure that everything is running ideally before granting
access.
Probably the best situation I have had the chance to work in was a few sum-
mers ago when the school I worked for decided to expand and purchase
a lot of new equipment. When the shipments arrived, I found myself on
the floor surrounded by five IBM Netfinity 5500 servers, five Intel 510T
switches, and 80 Dell laptops—all of which had to be networked. The nice
thing was that I had all summer to perform the task without anyone over my
shoulder asking if it was ready yet because they needed to check their
e-mail. Having the space and time planned out allowed me to carefully and
methodically unpack, inspect, install, and configure all the equipment. All
the hardware and software could be tested and run for days to confirm that
they would perform as expected. When problems did arise, there was
ample time to research and fix each situation.
Summary
Chapter 7 is all about network operating systems. Focus was given to
the major types of operating systems and their differences. We began with
an exploration of network operating system options including application
compatibility, hardware requirements, features, and cost. Each element was
discussed in comparison to the three major network operating systems:
Windows NT/2000, Novell NetWare, and Unix/Linux. OS/2 Warp, IBM’s
server operating system, was also introduced as another alternative.
Novell NetWare was released in 1983. It used a command-line-based
utility that was administered remotely from another computer. Major
releases of NetWare include version 3, which used a Bindery to maintain
usernames and accounts; version 4, which switched from the Bindery to
NDS (Novell Directory Services); version 5; and recently version 6. Novell
natively ran on IPX/SPX until version 5, which changed to TCP/IP.
Exam Essentials
Know the three main network operating systems. This includes Win-
dows NT/2000, Novell NetWare, and Unix.
Know the variables to consider when selecting a NOS. Application
compatibility, hardware requirements, features, and cost are the main
variables to consider.
Know the basic installation requirements for each major NOS. Novell
version 6 requires an Intel Pentium II or higher processor, 2GB of
hard drive space, 256MB of RAM, a CD-ROM drive, and a VGA adapter.
Windows 2000 requires an Intel Pentium 133, 128MB of RAM,
1GB of hard drive space, a mouse, keyboard, CD-ROM drive, and
a network card.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. Which network operating system is referred to as a Warp Server?
A. NetWare
B. Unix/Linux
C. Windows 2000/NT
D. OS/2
B. Unix
C. Windows 2000/NT
D. OS/2
E. Linux
B. Unix/Linux
C. Windows NT
D. OS/2
C. Windows 2000
D. NT 4
B. Unix/Linux
C. Windows 2000/NT
D. OS/2
B. Unix/Linux
C. Windows 2000
D. OS/2
B. Unix
C. Windows 2000/NT
D. OS/2
A. Bourne
B. Bash
C. X Window
D. DOS
B. PCONSOLE
C. User Manager
11. Which version of Linux is designed for an Intel platform and will
support networking?
A. SCO Unix
B. Slackware
C. Red Hat
D. SuSE
B. NetWare 5
C. OS/2 Warp
D. Linux
A. Syscon
B. Administrator
C. Bindery Manager
D. PCONSOLE
E. User Manager
B. Unix
C. Windows 2000/NT
D. OS/2
A. 64MB
B. 128MB
C. 256MB
D. 32MB
16. Which NOS would you most likely find using Visual Basic scripts?
A. NetWare
B. Unix/Linux
C. Windows 2000/NT
D. OS/2
A. Name space
B. Utility
C. Disk
D. LAN
B. Windows NT 4
C. Novell NetWare
D. Windows Me
A. Unix
B. Windows 2000
C. NetWare
D. OS/2 Warp
B. Unix/Linux
C. Windows 2000/NT
D. OS/2
TCP/IP Explained
TCP/IP is an Internet standard protocol that is implemented on
most networks today. It is a suite of different protocols that provide com-
munication capabilities between network computers. The following section
will provide you with a brief overview of the TCP/IP protocol suite and the
role each protocol plays in network communication.
What Is TCP/IP?
TCP/IP has become the most widely used protocol. It is the protocol used on
the Internet and on routed networks. TCP/IP is a suite of protocols that maps
to a four-layer reference model known as the DoD (Department of Defense)
Model. The Model basically describes how communication between two
hosts on a network occurs.
You may also be familiar with the OSI Model. This model was developed
to implement a standard on how network communication occurs. Vendors
design their products based on the OSI Model. Essentially the OSI Model
enables communication between software and hardware regardless of the
vendor (as long as the network components are designed to the standards of
the model).
The OSI Model divides network communication into seven layers, as
opposed to the four layers used in the DoD Model. Each layer in the DoD
Model maps to one or more of the seven layers in the OSI Model.
The four layers of the DoD Model are as follows: Application, Transport,
Internet, and Network. The following section will discuss the different layers
of the DoD Model and how the different protocols making up TCP/IP map
to this four-layer model.
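As a rough sketch of the mapping, the layers and the protocols covered in this chapter can be tabulated in Python. The groupings follow the common teaching convention (the exact OSI-to-DoD correspondence varies by source), so treat this as illustrative rather than normative:

```python
# Map each DoD Model layer to the OSI layers it spans, plus example
# protocols from this chapter. The Network layer defines no protocols.
DOD_TO_OSI = {
    "Application": ["Application", "Presentation", "Session"],
    "Transport":   ["Transport"],
    "Internet":    ["Network"],
    "Network":     ["Data Link", "Physical"],
}

DOD_PROTOCOLS = {
    "Application": ["SMTP", "SNMP", "FTP"],
    "Transport":   ["TCP", "UDP"],
    "Internet":    ["IP", "ARP", "ICMP", "IGMP"],
    "Network":     [],
}

for dod_layer, osi_layers in DOD_TO_OSI.items():
    protos = ", ".join(DOD_PROTOCOLS[dod_layer]) or "(none defined)"
    print(f"{dod_layer:12} -> OSI: {', '.join(osi_layers):40} protocols: {protos}")
```

Note that the four DoD layers together cover all seven OSI layers, which is the point the text makes.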
Application Layer
The top layer of the DoD Model is the application layer. This layer defines
how applications communicate with one another on a network. Application
layer protocols often provide support to client/server applications. A client
application running on one computer will communicate with the server appli-
cation running on another computer using the services of application layer
protocols. Let’s take a moment to review these protocols in detail.
Simple Mail Transfer Protocol SMTP (Simple Mail Transfer Protocol)
is used to send e-mail messages from one mail server to another. SMTP is
usually used to send e-mail messages over the Internet. The e-mail mes-
sages are then retrieved using POP or IMAP. When you configure your
e-mail application, you have to specify the SMTP server that your e-mail
application will be sending mail to.
Simple Network Management Protocol The Simple Network Manage-
ment Protocol is a set of protocols that are used for collecting information
about a network. SNMP agents are network devices such as computers,
routers, and bridges that gather information about themselves and return
the information to a system running a SNMP management program.
File Transfer Protocol FTP (File Transfer Protocol) is used to transfer files
between computer systems. It is a standard protocol that allows files to be
transferred between dissimilar systems. The user transferring files will
usually need a username and password with permission to access the file
unless a guest account is being used on the remote system. FTP uses the
TCP protocol to transfer the files.
Transport Layer
The next layer of the DoD Model is the transport layer. The protocols at this
layer define the type of transmission service between two hosts. The trans-
mission services provided by this level can be end-to-end and reliable or
broadcast-based and unreliable. The protocols operating at this level provid-
ing transmission services include TCP and UDP.
TCP provides reliable, connection-oriented delivery of data between hosts.
It takes the messages from the application layer and
breaks them down into smaller segments and uses sequencing. The difference
is that UDP is not concerned with acknowledgments and does not resend
packets that do not reach the destination host. Reliable delivery and error
checking are performed by protocols working at the upper layers of the
DoD Model.
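UDP's connectionless, no-acknowledgment behavior can be seen with Python's standard socket module. A minimal loopback sketch (the port is chosen by the OS; on a real network the datagram could simply be lost with no error):

```python
import socket

# Receiver bound to an ephemeral port on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
receiver.settimeout(2.0)
addr = receiver.getsockname()

# The sender just transmits; UDP provides no acknowledgment and no
# retransmission -- any reliability is up to the upper layers, as
# the text notes.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)

data, source = receiver.recvfrom(1024)
print(data)  # the datagram payload, if it arrived

sender.close()
receiver.close()
```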
Internet Layer
Protocols operating at the Internet layer are responsible for things such as
IP addressing and routing. This layer adds addressing information to the
packet before it is placed on the network, resolves IP addresses to MAC
addresses, and determines where a packet will be sent in order to reach the
destination host. The protocols that work at this layer include IP, ARP,
ICMP, and IGMP.
Internet Protocol
IP is by far the most important protocol working at the Internet layer and
is often referred to as the mailroom of the TCP/IP protocol suite. It is here
where packets from the upper layer are broken down into datagrams. IP
adds addressing information to each datagram including the IP address of
the sending computer and the IP address of the destination computer (refer
to the section titled “IP Addressing” for more information). IP also performs
routing functions because it is responsible for determining the best route for
a datagram to be sent to reach the destination host.
BootP
BootP (Bootstrap Protocol) allows diskless workstations to boot up
and send out a BootP request so the workstation can receive an IP address.
The broadcast contains the MAC address of the client. A BootP server picks
up the request and looks to the BootP file. If the BootP file contains an entry
for the workstation’s MAC address, the BootP server responds with the work-
station’s IP address and the name and location of the file it should boot from.
Network Layer
There are no protocols and services defined at the network layer but it does
perform some very important functions. Essentially, it takes datagrams from
the Internet layer and breaks them into bits (1s and 0s). Then it adds the
MAC address to the packet before it is placed on the network. Once placed
on the network, it will determine the access method used by the network (for
example, token passing, collision avoidance, collision detection). After it
defines the network’s access method, it shifts its focus to the physical aspects
of the network. The network layer determines the physical aspects of the net-
work, such as the media and connectors.
IP Addressing
Now that you have an understanding of the protocols that make up
TCP/IP and what each one is responsible for, let’s move on to IP addressing.
Introduction to IP Addressing
IP addresses are used to identify hosts on a given network. Every host
(this includes computers, routers, and network interface printers) on an IP
network requires an IP address. Each segment on a given network requires
a unique network ID and every host on a segment requires a unique host ID.
An IP address can be seen in two different formats—as a decimal number
and as a binary number. Computers read an IP address as a 32-bit binary
number like the one shown below:
10110111 11101111 10110011 10101011
For computer users, the IP address is viewed in decimal format. The
binary number, like the one shown above, is broken down in four sections
(called octets), each section containing eight bits. Each of the four octets is
then converted to a decimal format by converting each bit to a number value
and adding them up, giving you an IP address like the one shown below:
183.239.179.171
Binary uses 1’s and 0’s. Each bit in a binary number has a corresponding dec-
imal value. The bit values within each octet are converted to a decimal format
and then totaled. Table 8.1 shows the decimal value for each bit in an octet.
Binary:    1    1    1    1    1    1    1    1
Decimal: 128   64   32   16    8    4    2    1
Using this table, the binary number shown below can easily be converted
to the decimal number below it.
11000000 10101000 00011000 10000100
192.168.24.132
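The octet-by-octet conversion described above can be expressed in a few lines of Python (the helper name is ours):

```python
def binary_to_dotted_decimal(binary: str) -> str:
    """Convert a 32-bit address written as four space-separated
    binary octets into dotted-decimal notation."""
    octets = binary.split()
    # int(octet, 2) sums the value of each set bit (128, 64, ... 1)
    return ".".join(str(int(octet, 2)) for octet in octets)

print(binary_to_dotted_decimal("11000000 10101000 00011000 10000100"))
# -> 192.168.24.132, matching the example in the text
```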
IP Address Classes
Those who designed the Internet also broke the available IP addresses down
into different classes that could be assigned based on network size. The
number of hosts you can have per network address depends on the class
of address that you have been assigned. How you determine the network
and host ID’s within an IP address is also dependent on the address class.
Table 8.2 summarizes the five address classes.
The value 127 is not included in the available ranges; 127 is reserved for test-
ing purposes and cannot be assigned to any computer. 127.0.0.1 is referred to
as the loopback address and can be used to test whether TCP/IP is initialized
on a local computer by typing PING 127.0.0.1. Essentially what the computer
is doing is pinging itself.
Certain IP address ranges from the different classes have been excluded
for use on the public Internet. These are known as private IP addresses and
are only used on private networks. If you implement a private IP address
range on your internal network but still want to access the Internet, you will
have to implement some form of gateway, such as Microsoft ISA Server, that
has an interface with a valid Internet IP address. Table 8.3 lists the private
IP address ranges.
Network Address    Subnet Mask
10.0.0.0           255.0.0.0
172.16.0.0         255.240.0.0
192.168.0.0        255.255.255.0
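Python's standard ipaddress module already knows these reserved ranges, so you can check whether an address is private (or the loopback) directly. A quick sketch; 131.107.2.200 is simply a made-up public address for contrast:

```python
import ipaddress

# is_private covers the ranges in the table above (plus other
# IANA-reserved blocks, including 127.0.0.0/8 for loopback).
for text in ["10.1.2.3", "172.16.0.9", "192.168.24.132",
             "131.107.2.200", "127.0.0.1"]:
    ip = ipaddress.ip_address(text)
    kind = "private" if ip.is_private else "public"
    note = " (loopback)" if ip.is_loopback else ""
    print(f"{text:15} {kind}{note}")
```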
The problem with the current address classes is that they are very ineffi-
cient. For example, assigning a network of 10,000 users a Class A address
means that a large number of host addresses are going unused. A network
with 10,000 users being assigned a network address from a class that can
support up to 17 million hosts per network is inefficient. Also, with a typical
network today being essentially an internetwork—a single network made up
of multiple segments—how is routing going to occur if you have only been
assigned a single network ID? The following section introduces
subnetting, which was developed to overcome the current limitations
of the address classes.
Subnetting
Networks today are growing in size, spanning geographical locations, and
often consist of many different segments (networks within networks). You
may be asking yourself why you would want to divide a network into several
different distinct segments. Here are a few reasons:
It allows you to connect different types of networks such as an
Ethernet network with a token ring network.
Remote offices or locations can be made a part of the main network.
Traffic can be limited to a local segment, thus reducing broadcast
traffic.
If a network is to be divided into distinct segments, each segment will
require its own unique network ID. If you have a single network ID that has
been assigned to you by InterNIC, you must take that network ID and divide
it into other network ID’s.
Subnetting overcomes the limitation of a single network ID. Subnetting
involves taking a single network ID and splitting that into multiple subnets
(or multiple network ID’s). Each segment is separated by a router, and a
custom subnet mask is created so there is a way to distinguish between the
different subnets. Essentially what you are doing is taking a single network
ID and dividing it up into smaller subnets.
Since the subnet mask is used to determine the network ID of an IP
address, you will need to create a custom subnet mask. This is accomplished
by taking away some of the bits used for the host ID’s and creating a subnet
address. The bits that you are taking away from the host ID’s are now going
to be used to identify the different subnets within the network (this also
means that you will now have fewer host ID’s available on your network).
Take a look at a simple example to illustrate this: Suppose you have been
assigned a Class B address of 154.123.0.0. The default subnet mask associ-
ated with a Class B address is 255.255.0.0. Computer 1 and Computer 2
have been assigned the following IP addresses:
Computer 1 – 154.123.15.5 255.255.0.0
Computer 2 – 154.123.20.5 255.255.0.0
Using the subnet mask it can easily be determined that the two computers are
on the same subnet (by looking at the first two octets of each IP address to
see if they correspond). Now suppose the default subnet mask for the Class B
address is changed to the following because the network is divided into dis-
tinct segments and Computer1 and Computer2 reside on different subnets:
Computer 1 – 154.123.15.5 255.255.255.0
Computer 2 – 154.123.20.5 255.255.255.0
The custom subnet mask now indicates that these two computers are on
different subnets. Instead of looking at the first two octets of the IP address
(which is the default for a Class B address), the subnet masks now indicate
that the third octet is used to identify unique subnets. Since the decimal value
of the third octet does not correspond between the two IP addresses, the
computers are on different subnets.
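The comparison the text walks through is a bitwise AND of the address with the mask; if the resulting network IDs match, the hosts are on the same subnet. A sketch using the standard ipaddress module (the helper name is ours):

```python
import ipaddress

def network_id(address: str, mask: str) -> str:
    """AND each address octet with the matching mask octet to get
    the network ID, as a router would."""
    net = ipaddress.ip_network(f"{address}/{mask}", strict=False)
    return str(net.network_address)

# With the default Class B mask, both hosts share network 154.123.0.0 ...
same = network_id("154.123.15.5", "255.255.0.0") == \
       network_id("154.123.20.5", "255.255.0.0")
# ... but with the custom mask the third octets differ, so the hosts
# land on different subnets.
diff = network_id("154.123.15.5", "255.255.255.0") != \
       network_id("154.123.20.5", "255.255.255.0")
print(same, diff)
```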
Implementing a custom subnet mask may seem complex, but it follows a
very logical process. Use the steps outlined below to plan your
subnetted network.
1. Determine the number of subnets or the number of network ID’s that
will be required taking into account future growth. A unique network
ID is required for each subnet and each WAN connection.
2. Determine the maximum number of host ID’s you are going to need
for each subnet. A host ID is required for each network card using
TCP/IP, any network interface printers running TCP/IP, and for each
router interface.
3. Create a custom subnet mask based on the above information that
provides the necessary number of subnets and hosts per subnet.
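Steps 1 and 2 boil down to counting bits: borrowing n host bits yields 2**n subnets and leaves 2**h - 2 usable hosts per subnet (subtracting the all-zeros network address and the all-ones broadcast address). A sketch assuming a Class B network with 16 host bits; older RFC 950 practice also excluded the all-zeros and all-ones subnets, which this simple count does not:

```python
import math

HOST_BITS = 16  # a Class B network leaves two octets for host IDs

def plan_subnets(subnets_needed: int, hosts_needed: int) -> dict:
    """Work out how many host bits to borrow for subnetting."""
    borrow = math.ceil(math.log2(subnets_needed))
    remaining = HOST_BITS - borrow
    hosts = 2 ** remaining - 2  # minus network and broadcast addresses
    return {
        "borrowed_bits": borrow,
        "subnets": 2 ** borrow,
        "hosts_per_subnet": hosts,
        "enough_hosts": hosts >= hosts_needed,
    }

# Needing 6 subnets of up to 2,000 hosts: borrow 3 bits -> 8 subnets,
# 13 host bits -> 8,190 hosts per subnet.
print(plan_subnets(subnets_needed=6, hosts_needed=2000))
```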
Let’s take a look at another example to illustrate the process of creating
a custom subnet mask. Suppose you have been assigned the network address
of 131.107.0.0 and a subnet mask of 255.255.0.0 (Class B). Normally the
first two octets would be used for the network ID and the last two octets for
the host ID’s. By borrowing three bits from the third octet, you create a
custom subnet mask of 255.255.224.0. Notice that the binary value for the
subnet mask has now increased by three bits as compared to the default
subnet mask.
IP Version 6
Since the Internet has grown in popularity, there is a shortage of IP
addresses. The discussion so far has been focusing on IP version 4. The
next version of IP, version 6 (IPv6), will overcome the addressing limitations
of version 4 by using a 128-bit address. It will no longer be expressed as a
32-bit dotted-decimal number.
Assigning IP Addresses
TCP/IP Utilities
There will be instances in a network environment when you experi-
ence TCP/IP connectivity problems, such as a client is unable to see a server
on the network. TCP/IP comes with several utilities that can be used to trou-
bleshoot connectivity problems that do arise. You should be familiar with
these tools if you administer an IP-based network or if your internal network
is connected to the public Internet.
Some of the common TCP/IP utilities that are used for troubleshooting
include the following:
ARP
PING
Tracert
Ipconfig
Netstat
Telnet
Nbtstat
Many of these utilities can perform various functions depending on the
switches you use with the command. The switches available may depend on
the operating system you are running. The following section will outline
each of these utilities and how they may be useful in troubleshooting com-
munication in a server environment.
ARP
ARP stands for address resolution protocol. When a host on an IP-based net-
work wants to send data to another host, the host name must be mapped to
an IP address and the IP address mapped to a MAC address. ARP is the
protocol responsible for mapping IP addresses to MAC addresses. It does
this by sending out a broadcast packet containing the IP address of the
intended host. The host owning the IP address responds to the broadcast
with its MAC address.
Each host also maintains an ARP cache containing IP-to-MAC address
translations. Before a broadcast is made to resolve a MAC address, the ARP
cache is examined to see if there is an entry. The main purpose of this is to
cut down on the amount of broadcast traffic on a network. The syntax for
viewing the ARP cache on a local computer is as follows:
arp -a
By default, entries stored in the ARP cache are dynamic, meaning they are
not permanent and are deleted within two minutes if not referenced. Using
the -s switch you can add a static entry that will not be flushed out.
This is useful for frequently accessed hosts. When adding the entry you must
include the IP address of the host as well as the MAC address. Once the entry
has been added it is listed as static. Table 8.4 summarizes some of the other
switches that can be used with the ARP command.
Switch    Function
-s        Adds a static entry to the ARP cache for the Internet address and physical address specified
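The cache behavior described above can be sketched in a few lines of Python. This is a conceptual model only (the ArpCache class and ARP_TIMEOUT name are invented for illustration), not how an operating system actually stores its ARP table:

```python
import time

# Dynamic entries expire if not refreshed within ~2 minutes, matching the
# two-minute figure described in the text; static entries never expire.
ARP_TIMEOUT = 120  # seconds

class ArpCache:
    def __init__(self):
        self._entries = {}  # ip -> (mac, added_at, is_static)

    def add(self, ip, mac, static=False):
        self._entries[ip] = (mac, time.time(), static)

    def lookup(self, ip, now=None):
        """Return the cached MAC, or None if absent/expired (forcing a broadcast)."""
        now = time.time() if now is None else now
        entry = self._entries.get(ip)
        if entry is None:
            return None
        mac, added_at, static = entry
        if not static and now - added_at > ARP_TIMEOUT:
            del self._entries[ip]  # dynamic entry flushed, as the text describes
            return None
        return mac

cache = ArpCache()
cache.add("192.168.2.25", "00-0c-29-aa-bb-cc")               # dynamic entry
cache.add("192.168.2.1", "00-0c-29-11-22-33", static=True)   # like `arp -s`
```

A lookup miss is what triggers the ARP broadcast; a hit avoids it, which is the traffic-reduction purpose described above.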
PING
One of the most commonly used utilities when troubleshooting connectivity
problems on an IP-based network is the PING (Packet Internet Groper) utility.
It is a command line utility that is primarily used to see if another host
on the network is reachable and responsive. It works by sending ICMP echo
request packets to another host on the network and waiting for a reply.
Most TCP/IP stacks that come with the different network operating sys-
tems come with a PING utility. They function the same, with some small
variations such as the confirmation message that is received.
The general syntax of the PING utility is as follows:
Ping w.x.y.z
where w.x.y.z is the IP address assigned to the host that you are testing
connectivity with. If the host is reachable and responding, you will receive a
confirmation message (see Figure 8.1).
If the host is unreachable or is not responding, you will receive a request timed
out message or a destination host unreachable message (see Figure 8.2).
Once TCP/IP is installed and configured on a server (or client), you can
use the PING utility to determine if some of the parameters, such as the
default gateway, are correctly configured. If your network is subnetted and
connected via multiple routers, you can use this utility to ensure that remote
networks are reachable. For example, if your network configuration is sim-
ilar to the diagram shown in Figure 8.3, you can use PING to make sure that
TCP/IP is correctly configured on Server1 and that your server can commu-
nicate with servers on remote subnets using the steps outlined below.
(Figure 8.3 shows Server1 at 192.168.2.25 connected through Router 1, whose
interfaces are 192.168.2.15 and 192.168.15.15, and Router 2, whose interfaces
are 192.168.15.10 and 192.168.22.11, to Server2 at 192.168.22.10 on a remote
subnet.)
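The standard outward-stepping PING sequence for a layout like Figure 8.3 can be sketched as follows. The addresses are taken from the diagram, and first_failure is a hypothetical helper; in practice each step would be an actual ping command rather than the reachable stand-in used here:

```python
# Ping outward from the server, one step at a time, so the first failing step
# isolates where on the path the problem lies.
PING_SEQUENCE = [
    ("loopback (is TCP/IP initialized?)",     "127.0.0.1"),
    ("Server1's own address",                 "192.168.2.25"),
    ("default gateway (Router 1, near side)", "192.168.2.15"),
    ("Router 1, far side",                    "192.168.15.15"),
    ("remote server (Server2)",               "192.168.22.10"),
]

def first_failure(reachable):
    """Walk the sequence; return the label of the first unreachable step, or None."""
    for label, address in PING_SEQUENCE:
        if not reachable(address):
            return label
    return None
```

If the loopback step fails, TCP/IP itself is misconfigured; if everything up to the gateway succeeds but the far side of Router 1 does not, the problem lies with routing rather than with Server1's configuration.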
Tracert
Tracert is another TCP/IP command line utility that can be used to trace
the router interfaces that a packet must pass through to reach a destination.
The number of routers that a packet must pass through is displayed as a
hop count. Each router that forwards the packet is one hop. Tracert can be
useful for troubleshooting if you are unable to reach a destination host—for
example, if a router between your server and the destination host is having
problems. Using the utility will allow you to easily determine the router on
the network that is failing to forward packets (see Figure 8.4). Tracert can
also report the time it takes for a packet to reach its destination, which
can be useful in determining how efficient a specific route is.
Ipconfig
Ipconfig is a command line utility that can be used to view the TCP/IP
parameters assigned to a host, such as the IP address, subnet mask, and
nameservers. Ipconfig is available on Microsoft operating systems but will
not work on NetWare or Unix systems (Unix systems provide similar information
through the ifconfig command). To use the utility, type ipconfig
from a command prompt. To view more detailed TCP/IP parameters
assigned to a host, use the /all switch along with the command. This com-
mand can be particularly useful if dynamic addressing is being used. For
example, you can use the command to view and troubleshoot the parameters
that have been assigned.
Netstat
Netstat can be used to view the current TCP/IP inbound and outbound
connections on a computer. It can also provide you with information on
listening ports, TCP or UDP connection stats, Ethernet stats, and it can
display the contents of the routing table. Figure 8.5 shows the output of the
Netstat command using the -n switch. Table 8.5 lists the switches available.
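To give a feel for what Netstat reports, here is a small sketch that tallies TCP connection states from netstat-style output. The sample lines are fabricated for illustration and do not come from a real host:

```python
# Hypothetical netstat-style output: protocol, local address, foreign
# address, and connection state.
SAMPLE_OUTPUT = """\
TCP    192.168.2.25:139    192.168.2.40:1026   ESTABLISHED
TCP    192.168.2.25:80     192.168.2.41:1107   ESTABLISHED
TCP    192.168.2.25:21     0.0.0.0:0           LISTENING
"""

def count_states(output):
    """Tally how many TCP connections are in each state."""
    counts = {}
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 4 and fields[0] == "TCP":
            state = fields[3]
            counts[state] = counts.get(state, 0) + 1
    return counts
```

A sudden jump in ESTABLISHED connections, or listening ports you did not expect, is exactly the kind of pattern this utility helps you spot.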
Telnet
Telnet is a remote terminal emulation program that is most often used for
troubleshooting on TCP/IP networks. The Telnet program runs on a work-
station and is used to connect to another host such as a network server. Once
you establish a session with a host using Telnet, you can enter commands
within the Telnet console that will be executed on the remote host or server.
Telnet is often used in Unix environments as well as for configuring and
troubleshooting devices such as routers and switches.
Nbtstat
The Nbtstat utility is used to view NETBIOS over TCP/IP statistics, such
as any current connections using NETBIOS over TCP/IP, protocol statistics,
and any NETBIOS names that have been resolved to IP addresses. Table 8.6
summarizes the switches available with this command line utility.
Summary
In this chapter you learned about the fundamental concepts underlying
the TCP/IP suite of protocols. Since TCP/IP plays such an important role in
networking and server environments, it is important to have some under-
standing of it. TCP/IP maps to the four-layer DoD Model and provides com-
munication services between hosts on a network. Each of the protocols
included in the suite operates at one of the four levels performing a specific
function.
Exam Essentials
Understand the DoD Model. TCP/IP maps to a four-layer conceptual
model known as the DoD (Department of Defense) Model. It defines
four layers, each layer performing specific functions enabling network
communication.
Understand the TCP/IP protocols. TCP/IP is a suite of protocols. Each
of the protocols has a specific role and operates at one of the three upper
levels of the DoD Model. Each protocol plays a role in network commu-
nication between two hosts.
Know the difference between TCP and UDP. TCP and UDP work at
the transport layer providing transmission services. TCP is an end-to-end
connection-based protocol that provides reliable delivery of information
between hosts. UDP is connectionless and does not provide reliable
delivery of information.
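The connectionless behavior described above can be demonstrated with two UDP sockets on the loopback interface. This is a minimal sketch using Python's standard socket module; note that the sender never calls connect():

```python
import socket

# Minimal loopback demonstration of UDP's connectionless delivery: no
# handshake is performed; the sender simply addresses each datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"no handshake required", addr)   # no connect() call needed

data, source = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

A TCP exchange, by contrast, would require connect() and accept() to establish the end-to-end session before any data could flow.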
Understand IP addressing and subnetting. Every host on an IP network
requires an IP address and a subnet mask. An IP address is a 32-bit logical
number made up of a network ID and a host ID. To overcome the limitations
of IP addressing, you can implement subnetting, which borrows bits from
the host ID to use as a subnet ID.
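As a worked example (using a hypothetical address), Python's standard ipaddress module can show how the subnet mask splits an address into network ID and host ID, and how subnetting borrows host bits:

```python
import ipaddress

# Applying the subnet mask splits the 32-bit address into its network ID
# and host ID.
iface = ipaddress.ip_interface("192.168.2.25/255.255.255.0")
network_id = str(iface.network.network_address)             # network portion
host_id = int(iface) - int(iface.network.network_address)   # host portion

# Subnetting borrows host bits for a subnet ID: carving a Class B network
# with an 8-bit subnet ID yields 256 smaller subnets.
class_b = ipaddress.ip_network("172.16.0.0/16")
subnets = list(class_b.subnets(prefixlen_diff=8))
```

Here network_id is 192.168.2.0 and host_id is 25, and the Class B network splits into 256 /24 subnets, the first being 172.16.0.0/24.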
Understand the function of the different TCP/IP utilities. TCP/IP
supports many different utilities that can be used in troubleshooting
an IP network.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
ARP POP
default gateway SMTP (Simple Mail Transfer Protocol)
IMAP subnet mask
IP TCP (transmission control protocol)
Ipconfig TCP/IP
Nbtstat Telnet
Netstat UDP (user datagram protocol)
PING
Review Questions
1. Which of the following TCP/IP protocols provide end-to-end reliable
communication between hosts?
A. IP
B. UCP
C. TCP
D. ARP
2. Which of the following utilities can be used to view the TCP/IP con-
figuration parameters on a Windows based computer?
A. PING
B. Ipconfig
C. Netstat
D. Nbtstat
3. What TCP/IP utility will display a list of all routers a packet must pass
through to reach a destination host?
A. Telnet
B. PING
C. ARP
D. Tracert
4. What is TCP/IP?
A. A protocol used in network communication
B. A troubleshooting utility
5. You have just installed TCP/IP on one of your network servers. You
want to test to make sure that it is initialized on the server. What
command can you use?
A. Ping 127.0.0.1
B. Tracert hostname
C. PING hostname
D. Ipconfig 127.0.0.1
A. TCP
B. IGMP
C. ICMP
D. UDP
B. ARP
C. ICMP
D. TCP
B. 255.255.0.0
C. 255.255.255.0
D. 255.255.255.255
A. Network
B. Internet
C. Transport
D. Application
B. Telnet
C. Netstat
D. Tracert
C. 255.255.255.0
D. 255.255.255.255
13. What DoD layer is responsible for taking datagrams and breaking
them up into bits, before adding the MAC address and placing the
data on the network media?
A. Network
B. Application
C. Internet
D. Transport
B. SNMP
C. PPP
D. NDP
B. IP
C. UDP
D. ARP
A. 192.0.0.0
B. 0.168.10.220
C. 0.0.10.220
D. 0.0.0.220
A. 129.158.221.0
B. 0.0.221.15
C. 129.0.0.0
D. 129.158.0.0
18. What utility allows you to display all the current connections for a
server?
A. FTP
B. Netstat
C. Nbtstat
D. PING
B. 240.254.171.26
C. 224.255.171.26
D. 224.254.171.16
20. Which of the following TCP/IP utilities will allow you to view statis-
tics on NETBIOS over TCP/IP?
A. Netstat
B. NetBIOSstat
C. Nbtstat
D. Nbstat
9. D. SMTP and POP are responsible for the sending and receiving of
e-mail messages. Therefore the problem would lie at the application
layer.
10. A. When 131.107.54.221 is converted to binary, the correct binary
number is that of answer A.
11. B. Telnet will allow you to connect to a remote device such as a server
or router and perform remote administration tasks.
12. B. 129.10.115.120 is a Class B address. Therefore the subnet mask
will be 255.255.0.0.
13. A. Datagrams are broken up into 0’s and 1’s and addressed at the
Network layer of the DoD model. The data is correctly addressed with
the MAC address before being sent out on the media to its destination.
14. A. UDP performs the same function as TCP only it is a connectionless
protocol.
Assessment
One of the first steps in beginning to upgrade a server is to determine
exactly what needs to be upgraded. This of course will depend on the role the
server plays on the network and your reasons for performing the upgrade.
If a server has become a bottleneck, you will need to determine what server
component needs to be upgraded. Not performing an assessment can lead
to an unnecessary upgrade being performed and money being misspent. The
following section will look at performing an assessment of your server to
determine when to upgrade and how to determine what component to
upgrade.
Assessment should not be a one-time process; instead, the information
gleaned from constant monitoring can be useful for planning preventative
maintenance and determining upgrade paths.
Monitoring tools provide a means of watching and assessing server
performance. Windows 2000, for example, offers a program called System
Monitor, which is located in the Administrative Tools folder within the
Control Panel. Figure 9.1 is a screen shot from the System Monitor.
Notice how the System Monitor offers a visual graph to represent current
performance of the computer. The information represented in the graph is
user-selected. In the screen shot example, processor time is the option
selected. To select an option simply click the plus button and choose the
counter you want to monitor. Figure 9.2 illustrates the Add Counters screen.
With the System Monitor numerous counters can be used at one time.
This allows data to be collected and compared on several areas at once. The
data accumulated can then be analyzed to locate patterns in system behavior.
For example, does an increase in processor time affect the interrupts-per-
second counter?
Besides the System Monitor, the Windows 2000 Performance utility also
shows Performance Logs and Alerts, including Counter Logs, Trace Logs,
and Alerts.
The Counter Logs Monitor allows you to set a period of time during
which the information gathered in the System Monitor will be recorded in a
text file on the hard drive. This allows you to maintain a record of perfor-
mance. This record can be reviewed at a later date or used to compare to data
gathered at another time.
The Trace Logs Monitor lets you trace changes and actions occurring on
a computer. This information is useful in locating the source of potential
problems. If a program action is responsible for server stress, then the Trace
Log will assist in tracking the program through its utilization of server
resources.
The Alerts utility provides a means of setting a threshold on counters.
For example, the processor utilization seen in Figure 9.1 can have an alert set
so an action is performed should the number exceed a user-set value. This
action can include logging a message, sending a network message, starting
a performance data log, or running a program. With an alert set, server
stress can be properly handled before it leads to a bigger problem such as
system failure.
Regardless of the monitoring tools that are used (numerous third-party
software packages are also available), the idea is to seek out potential bot-
tlenecks. Bottlenecks are components whose limited capacity hinders the
performance of the system as a whole; common culprits include processor
speed, amount of RAM, and hard drive speed. System monitoring will assist
in locating these problem areas. Monitoring tools used over a period of time will also assist in
locating areas in which performance is slowly decreasing. This is a common
problem that often goes unnoticed until it has become a serious issue.
Assessment also takes on the form of analysis of both the need to upgrade
as well as the upgrade procedure itself. With the assistance of Performance
Logs and Alerts, the decision to upgrade computer hardware or software can
be planned and carried out before poor server performance becomes critical.
This saves you the hassle of dealing with an emergency upgrade at a less than
opportune time. Since most servers are extremely expensive and complex
devices, deciding on an upgrade requires careful planning and assessment.
Why Upgrade
As time goes on, networks change, and changing networks have a direct
impact on servers. Whether it is an increase in size or the use of a new net-
work service, these changes have a direct impact on the workload placed on
the server. To meet the demands of a changing environment and to be able
to perform under the expected workload, servers will need to be upgraded at
some point. This upgrade may be a hardware upgrade, a software upgrade,
or both.
You may think that the only reason you would upgrade a server is because
a component is failing or becoming a bottleneck, but there are several other
reasons as well. The following topics describe a few of the reasons why you
might need to upgrade a server.
Server Role
As you saw in Chapter 1, a server can play many different roles in a network
environment. The type of role that a server is playing will determine its
hardware and software requirements. For example, one of the most impor-
tant components in a file server is the disk subsystem, but the most important
component in an application server is the processor. Changing a server from
one role to another may require that the server be upgraded.
Network Growth
An increase in the size of a network will directly impact the workload being
placed on your servers. An increase in the number of users means an increase
in the number of users accessing network servers, whether it is for logon
validation, accessing shared resources and applications, or other services.
A server that was once capable of handling the workload might now need to
be upgraded to meet this increase in demand.
storing user data, you may quickly run out of available storage space and
find yourself needing to upgrade.
What to Upgrade
The examples given above describe just a few of the possible reasons for
upgrading hardware components and software on a server. Regardless of the
type of upgrade you are performing, preplanning is a must.
Determining what server component/components need to be upgraded
can be a difficult task and will depend on your reason for performing the
upgrade. If you are changing the role the server plays, you will need to assess
what components are most important for that given role. If you plan to make
an existing server a file server, you will probably be most concerned with
upgrading the server’s disk subsystem. If your reason for upgrading is poor
server performance, you will need to take some time assessing the server to
determine which component is causing the bottleneck. This is where a server
baseline becomes important (which also falls under the category of server
maintenance).
Server baselines can be used by network administrators to gauge the per-
formance of a server over time. The baseline helps to establish what normal
or acceptable performance for a server is. Using the information gathered
over time, you can monitor the different server components to see if they are
functioning individually and as a whole.
You’re not going to spend all of your time establishing baselines. You will
want to establish one when you first get the server running properly, and
then each time you add a component or make a major change, you will want
to re-establish new baselines as necessary. This will allow you to see how the
performances of different components change over time (establishing server
baselines will be covered in much more detail in Chapter 12, “Performance
and Hardware Monitoring”). Once you see deterioration in server perfor-
mance, you can monitor your server, compare it to your baseline of accept-
able performance, and use the comparison to determine what component (or
components) has become a bottleneck and what may need to be upgraded.
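The baseline comparison described above can be sketched as follows; all counter names and numbers are hypothetical, and real tooling would average many samples over time rather than compare single readings:

```python
# A component whose current reading deviates from its baseline by more than
# the allowed percentage is flagged as a possible bottleneck.
def find_bottlenecks(baseline, current, tolerance_pct=25):
    flagged = []
    for counter, base_value in baseline.items():
        change = (current[counter] - base_value) / base_value * 100
        if change > tolerance_pct:
            flagged.append(counter)
    return flagged

baseline = {"cpu_pct": 30, "disk_queue": 1.0, "mem_pages_sec": 200}
current  = {"cpu_pct": 35, "disk_queue": 3.5, "mem_pages_sec": 210}
flagged = find_bottlenecks(baseline, current)
```

In this made-up reading the disk queue has more than tripled against its baseline while CPU and memory are within tolerance, pointing the upgrade effort at the disk subsystem.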
Once you’ve determined what needs to be upgraded within a server, you
can then begin planning for the actual upgrade. The tasks that need to be
completed before doing the upgrade will be covered in the following section.
What to Upgrade
Upgrade Procedures
As previously mentioned, carrying out an upgrade on a server is not
something to be decided on lightly. A careful plan of the upgrade,
including a timeline, due dates, and milestones, must be established. During this
planning stage, consideration of compatibility between the upgrade and
existing hardware, software, and operating system must be dealt with, as
well as the clients who access the server. Will their hardware, software, and
operating systems be compatible with the server’s upgrade?
In order to properly and successfully perform a server upgrade, it is
important to follow some sort of procedural checklist. These are just some
basic but important steps that should be followed any time an upgrade
procedure is performed on a server, regardless of whether it is a hardware
upgrade or software upgrade. Preplanning may seem like a tedious task to do
for an upgrade but, when dealing with network servers, it is always better to
be safe than sorry. Take for example an operating system upgrade. This
is one of the most important upgrades that can be performed on a server
Upgrade Nightmare
I remember a few years ago being called out to an office where an operating
system upgrade was performed and they had some problems. Upon arrival
I learned that the office was originally using Windows 98 and wanted better
security and file safety so they chose to upgrade to Windows NT. After the
upgrade, several computers had hardware problems, especially with mice
and keyboards. As it turns out, the hardware in question was interfacing
with the computer through the USB port. Unfortunately, Windows NT does
not support USB interfaces. Had this company researched carefully into the
upgrade, it would have realized this problem before deciding on performing
the upgrade.
Choosing Components
In deciding on an upgrade, be sure to take the time to research numerous
vendors and manufacturers. The multitude of products and manufacturers
may seem overwhelming, but it will give a clear idea of the options and price
range available. During this time, comparison to the components currently
installed in the server will prevent any incompatibility. Most hardware and
software manufacturers have extensive documentation on their websites and
will share potential problems as well as incompatibility and known issues.
Once components have arrived, the next step is to confirm all parts are
accounted for and are not damaged. Copies of invoices and order forms
should be verified and then filed into a documentation folder. Should a time
come when confirmation or verification of parts is needed, referral to these
invoices will be of great assistance.
may have to be taken care of. For example, approval of the upgrade from
management or a board of directors may have to take place before the
go-ahead is given.
Scheduling the upgrade entails determining what upgrade steps are going
to be completed at what times and who is responsible for performing them.
The plan that is developed will of course depend on the type of upgrade that
is being performed. A simple upgrade can be scheduled to occur during a
span of a few hours while a more elaborate upgrade may be scheduled to
occur over a span of several days (even months in some cases). Either way,
a schedule of an upgrade organizes the procedures so technical staff—and
maybe nontechnical staff—know what is going to occur and when it will
occur. Depending on the business policies and how relaxed they are, your
schedule may just include the date and time that the upgrade will occur or it
may include details such as when the hardware and software will be pur-
chased, when the testing phase will begin, and when users will be notified of
server downtime.
Backup
Talk to any experienced network administrator and one of the first things
they will recommend before making any changes to a server is to perform a
full backup. This means backing up the entire server. Your backup should
include a backup of the operating system, any applications, and all data
stored on the server.
Once you’ve performed a full backup, you may think that you are ready
to upgrade, but there have been many instances where a network adminis-
trator has backed up data only to find that it cannot be restored. So once
again, to be on the safe side, do a trial run and test the restore. You will want
to have a test machine on which to run the restore, just to make sure the
whole thing works. If you do not have a test machine available, it’s time to
get one.
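One simple way to verify a test restore, sketched here with hypothetical file names and paths, is to compare checksums of the restored copies against the originals:

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """Return the SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restore_matches(original_dir, restored_dir, filenames):
    """True only if every restored file's checksum matches the original."""
    return all(
        file_digest(os.path.join(original_dir, name))
        == file_digest(os.path.join(restored_dir, name))
        for name in filenames
    )

# Simulate an original and a restored copy in temporary directories.
orig = tempfile.mkdtemp()
rest = tempfile.mkdtemp()
for d in (orig, rest):
    with open(os.path.join(d, "payroll.dat"), "wb") as f:
        f.write(b"critical data")
ok = restore_matches(orig, rest, ["payroll.dat"])
```

On a real test machine the restored directory would come from your backup software; the checksum comparison catches the silent failure case where the backup ran but the data cannot be restored intact.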
It’s a good idea to send out multiple messages to your clients warning
them that the server is coming down. Invariably though, after you bring the
server down, you will get a dozen phone calls from frantic employees that
either didn’t pay attention or didn’t save their files.
Obviously, if you are performing an upgrade you are making changes to the
configuration of a server. Preplanning is of the utmost importance, but what
about after the upgrade is complete? It is generally good practice and com-
mon courtesy to document any changes that have been made to the server
and keep the information in a log somewhere secure. If you are the only net-
work administrator, then obviously you are the one who monitors changes.
But if you are not, it is good practice to document the exact changes that
were made. Nothing can unnerve an administrator more than sitting down
at a server only to find significant changes and not know when they were
performed or what was done. Not only is providing documentation a com-
mon courtesy, but it also aids the next administrator in troubleshooting any
problems that may occur after the upgrade has been completed.
Risk Assessment
Any time you perform a server upgrade, there are risks involved that
you need to be aware of beforehand. So during the preplanning phase, one
of the tasks you need to complete (and this can be done when creating a
backout plan) is to perform a risk assessment. A risk can be defined as a
potential problem that may arise during the upgrade. A risk assessment
entails identifying the possible risks of the upgrade and having a contingency
plan in place should they occur. As an example, what would happen if you
upgrade Windows NT to Windows 2000 and the server’s NIC isn’t on the
Windows 2000 HCL? Your overall goal in performing a risk assessment is to
identify the risks, eliminate them if possible, or find ways to eliminate their
impact should they occur. Part of your risk assessment plan should be to
determine how critical the server is to the day-to-day operation of the busi-
ness and how much downtime can be afforded if a problem does arise.
Part of addressing the potential risk factor is determining if the upgrade is
necessary. Do the benefits of the upgrade outweigh the potential risks? It is
not only unnecessary, but also unwise, to install every available update that
you can find. At times, upgrades will conflict. It is very common to see
numerous versions of the same update released over time. Always check the
date on the upgrades to ensure that you are not downloading an outdated
version.
This is one instance where being a little on the paranoid side can be a positive
thing. Always plan for the worst, even though the chance of the worst actually
occurring is minimal. In the end you will feel much more confident when it
comes time to perform the upgrade.
Server Availability
Your ultimate goal is to make network servers available 100 percent
of the time (this is referred to as server uptime) or when users need to have
access to them. If a business is up and running 24 hours a day, 7 days a week,
your network servers will need to be available 100 percent of the time.
Although the goal is 100 percent uptime, in some cases this may be impos-
sible to achieve (take for instance a business with a single server). Obviously
the uptime required by businesses will vary depending on the applications,
services, and data being stored on the server and how critical they are to the
company’s day-to-day operation.
will depend on the role the server plays on the network and how critical it is.
Take for example a file server that is accessed occasionally by users on the
network. In this case, taking the server offline will have little impact on the
users. If another server is available on the network, the files can easily be
duplicated and made accessible during the time the server is offline. What if
the server you are upgrading is a domain controller? Taking this server
offline for any amount of time can have a profound effect on the network
and will mean configuring another server to take its place in the meantime.
When determining how much downtime can be afforded, begin by assess-
ing what role the server plays on the network. Document all the services,
applications, and resources hosted on the server and how they apply to the
operation of the business. Anything that is critical to its operation may have
to be duplicated onto another server while the upgrade is done. Consider a
server that is leasing IP addresses to users on the network. If this server were
to be unavailable for a period of a week while an upgrade is performed, it
would have a major impact on what users could do on the network. The
service would then need to be duplicated onto another network server.
Increasing Availability
Increasing the availability of a server will depend on the environment in
which you are working. If you have access to multiple servers, then it is a
matter of duplicating those services or applications onto another one. The
following list includes some suggestions for minimizing the impact of server
downtime, thereby increasing server uptime.
Schedule upgrades to occur during off-hours making the server
available when users need access to it.
If possible, configure a server to take the place of the one being
upgraded while it is offline.
Take advantage of clustering technology for critical servers.
Take the time to complete the pre-upgrade tasks so you are prepared
for any potential problems that might cause the upgrade to take longer
than predicted.
This is one instance where clustering servers plays an important role. As you
recall, when servers are in a cluster configuration, one is waiting to take over
the workload of another if it goes offline. This includes any planned outages
for upgrade and maintenance procedures. The resources on one server can
fail over to another server in the cluster, and downtime is decreased to a
matter of seconds (users probably won't even notice). You can then take the server
off the network and perform the necessary upgrade. Once the upgrade is
complete, the server can reassume its workload and all is well. Clustering is
probably not necessary for servers that play a minimal role on the network.
But for businesses that rely heavily on web servers for e-commerce, it can
essentially mean millions of dollars.
supply and/or the UPS. All servers should be connected to a UPS (uninter-
ruptible power supply). A UPS is nothing more than a battery system that
will maintain power to the server in the event of an electrical failure. The
battery is connected between the server and the electrical outlet. When there is
a power failure, the battery system will run the server and any other devices
connected to the UPS for a period of time. Depending on the size and strength
of the battery system, a UPS may support minutes to hours of power.
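As a rough illustration of why runtime varies, battery capacity divided by load gives a back-of-envelope estimate. All figures here are hypothetical, and real runtime is nonlinear (it depends on battery age, temperature, and inverter efficiency), so treat this as a sketch rather than a sizing tool:

```python
# Back-of-envelope UPS runtime: usable battery energy divided by the power
# draw of the connected equipment.
def estimated_runtime_minutes(battery_watt_hours, load_watts, efficiency=0.9):
    return battery_watt_hours * efficiency / load_watts * 60

# e.g. a 540 Wh battery carrying a 450 W server load
runtime = estimated_runtime_minutes(battery_watt_hours=540, load_watts=450)
```

Doubling the load roughly halves the estimate, which is why UPS software monitors the load placed on the battery and flags the need for an upgrade when new components are added.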
Software is also available that will interface between the UPS and server
allowing for alerts and remote notification that the server is running off bat-
tery power. Another key feature that UPS software provides is a shutdown
option for the server. If the electrical power is not re-established before the
battery runs out, the software will alert users, close all software programs
that are running, and then power down the server operating system. This
prevents the hard shutdown that occurs when power fails and all open
programs including the operating system are abruptly shut down. Hard shut-
downs can cause loss of data as well as program corruption. If the UPS that
the server is currently using is underpowered, then an upgrade to the UPS
will also have to be made. UPS software also monitors the power load being
placed on the UPS battery. If the load exceeds the recommended battery
load, then the UPS should be upgraded. Anytime a hardware upgrade is
performed, the UPS load should be checked to confirm that it is capable
of handling the stress of the added component.
Working in conjunction with a UPS should be a surge protector. Some
manufacturers combine both a UPS and surge protector in one, but it is
advisable to purchase a dedicated surge protector. A surge protector has a
built-in circuit breaker that will trip should a spike of electricity arrive. This
prevents the momentary increase in electricity from damaging the computer.
Many surge protectors offer modem and network protection along with
electrical protection. This ensures that any potential damage traveling down a
telephone or network line will be caught before it can reach the computer.
All sensitive equipment should be protected by a surge protector (including
computers, printers, fax machines, and networking products).
Software upgrades include patches, new software, upgrades to existing
software, and firmware. Patches are small programs that are installed to
repair or add features to existing software. An example would be a patch for
a database program to repair a known problem. Normally patches are sim-
ple to install, using an executable file that does all the work, but at times
patches will require you to manually install files or overwrite existing files.
New software includes any program, driver, or file being installed to the
server that has not previously been installed. Research to ensure compatibil-
ity with other software and the operating system is a must. Firmware (as
discussed in Chapter 3, “Motherboards and Processors”) is software that
controls hardware. Firmware updates on a server include SCSI controllers,
RAID controllers, tape drives, and CMOS BIOS. Although other hardware
devices have firmware, these four are the most common firmware updates.
When installing firmware updates, be sure to obtain the software from the
product manufacturer. The potential risks of component damage or inoper-
ability should be a deterrent enough to not download firmware updates from
just any Internet site. Manufacturer’s sites design and verify firmware updates
specifically for their products. By downloading from the manufacturer’s site,
you can be reasonably certain about the reliability of the firmware.
Operating system upgrades and updates normally require purchasing the
update on a CD-ROM or downloading it from the operating system website.
Similar to software updates, operating system updates include repairs to
known problems as well as enhancing features. Again, before installing any
update, confirm that it will be compatible with your existing hardware and
software. Windows 2000, for example, offers updates called service packs.
Two service pack updates have been released to deal with known operating
system problems. Operating system updates are available for Windows,
NetWare, and Unix based systems. Frequently, visiting the operating system’s
website will help keep you up to date on the current releases as well as their
role. New operating systems such as Windows 2000 also offer notification of
new patches and service packs. This alert can come in the form of an e-mail
or a popup notification in the system tray.
With this management feature you can verify software and hardware
operations. The event viewer will assist in identifying potential problems with
applications, Internet Explorer, security, and system. Figure 9.5 shows errors
generated by the system log. Notice how errors are shown as an X in a circle
graphic in the right pane, while warnings are an exclamation point in a triangle.
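The error/warning distinction the viewer draws can be sketched in a few lines of Python. The record format below is invented for illustration; it is not the Event Viewer's actual data layout:

```python
# Hypothetical event records; the Windows Event Viewer exposes similar
# severity levels (error, warning, information) for each log entry.
events = [
    {"source": "Service Control Manager", "level": "error"},
    {"source": "W32Time", "level": "warning"},
    {"source": "eventlog", "level": "information"},
    {"source": "Disk", "level": "error"},
]

def filter_by_level(records, level):
    """Return only the records matching the given severity level."""
    return [r for r in records if r["level"] == level]

print(len(filter_by_level(events, "error")))  # → 2
```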
Documentation
Once everything is returned to normal and the server is fully functional
again, a detailed documentation of the upgrade should be created. This
information can assist at a later date should the need to retrace steps occur.
Also, any issues that came up as a result of the upgrade should be outlined.
This documentation should be kept in a safe location and include the plan-
ning stage as well as all shipping and order invoices. Be sure to outline the
upgrade procedures clearly, including screen shots of error messages and
detailed steps to rectify the errors.
Unsuccessful Upgrades
For those of you who have experience performing server upgrades,
you know that there are times when the upgrade will fail, regardless of how
much time and preparation went into planning. Hardware and software
upgrades can be unpredictable, and when it comes to performing upgrades,
sometimes failure is an option. So at some point you may have to stop trying
to fix a failing upgrade and begin the recovery process.
The effects of a failed upgrade can certainly be minimized by your pre-
planning tasks. This is when you can refer to the backout plan that you
developed. The backout plan is going to tell you how to recover from the
failed upgrade and restore the server to its original state (this may be through restoring from a backup tape or reverting to the original hardware).
The backout plan should also give you an indication as to when it is time to
stop the upgrade and begin restoring the server. For example, if you have
scheduled the server to be unavailable for a specific amount of time and are
reaching the time when it should be brought back online, it is probably time
to stop trying to fix the upgrade and start trying to bring the server online
again. As difficult as it may be, especially after the time and effort that is put
into planning for the upgrade, at some point you will need to make the deci-
sion to cut your losses and start your restore.
One of the most important steps you can take after a failed upgrade is to
document the entire process. All is not lost from a failed upgrade and it
should be looked upon as a learning experience. Document as much of the
upgrade process as possible, such as when it failed, the steps you took to fix
the problem, error messages that were generated, and any other information
that will help you to determine why the upgrade failed. Chances are you will
attempt to perform the upgrade again in the future and this information
will be useful for troubleshooting.
Maintenance
Regular maintenance on a server ties in closely with upgrading.
Regular maintenance includes monitoring server functionality as well as
performing regular routines to maintain server operations.
Proactive Maintenance
Proactive maintenance is performed to prevent problems from occurring. An
example of a common proactive maintenance activity is cleaning backup tape
drives regularly. By doing this simple activity you can prevent backup failures
due to dirty or magnetized heads on the tape unit.
You should only use compressed air designed for computer use. Air compressors deliver pressurized air that may contain contaminants such as oil. Using suction devices such as vacuum cleaners can cause harm by actually ripping delicate parts off.
Air filters can also be installed over the fan intake grill. These filters trap
the dust particles before they can enter the fan assembly. Since servers nor-
mally contain several fans that move air throughout the server, the use of fil-
ters can be a good idea. These inexpensive devices can prevent dust buildup,
especially in areas that are not readily accessible for cleaning. However, if
you’re going to use them, make sure to change them frequently. The last
thing you want is for them to get clogged up, causing the system to overheat.
Probably the most common known area of proactive maintenance is in
virus protection. Viruses are a threat to any computer, and with new viruses
emerging weekly, they should be taken seriously. Virus protection in a net-
work can be based individually on each computer, or server-centric. A server-centric virus program is called a virus protection suite. This software offers
centralized protection and updates. The real benefit of a virus protection
suite occurs when the time to update the virus definition comes. A virus
definition is a list of the known viruses that is used by the virus protection
engine to monitor your computer. If the virus program is individually based, each computer's definitions must be updated separately.
Baselines
Baselines are measurements of server performance. This measurement is
taken over a period of time to determine how well a server will handle appli-
cation and stress loads. Based on the information gathered, a baseline of
performance can be set. This will help distinguish acceptable server perfor-
mance from unacceptable performance when the server is operating under
normal and heavy loads.
One of the key tools used in creating baselines in Windows 2000 is the
System Monitor. As discussed earlier in this chapter, the System Monitor
provides a means of visually monitoring how a server is doing. Using System Monitor to record server performance, a baseline is easily created. This
Computer Management feature will even alert you if the performance dips
below the established baseline.
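The idea behind a baseline can be sketched in a few lines of Python. This illustrates only the arithmetic, not the System Monitor interface; the sample figures are hypothetical:

```python
def establish_baseline(samples):
    """Average a series of performance samples taken under normal load."""
    return sum(samples) / len(samples)

def dips_below(reading, baseline, tolerance=0.20):
    """Flag a reading that falls more than `tolerance` below the baseline."""
    return reading < baseline * (1 - tolerance)

# Hypothetical requests handled per second, sampled under normal operation
weekly_samples = [480, 510, 495, 505, 490]
baseline = establish_baseline(weekly_samples)  # 496.0
print(dips_below(350, baseline))  # → True: worth investigating
print(dips_below(470, baseline))  # → False: within normal variation
```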
Thresholds
Thresholds are set values similar to baselines; however, with thresholds,
there is an acceptable range of values. This set minimum and maximum
range conforms to a safe operating limit for the computer component.
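A threshold check reduces to a simple range test; the temperature limits in this sketch are hypothetical:

```python
def within_threshold(value, minimum, maximum):
    """Return True while a component reading stays in its safe operating range."""
    return minimum <= value <= maximum

# Hypothetical CPU temperature limits in degrees Celsius
print(within_threshold(62, 10, 75))  # → True
print(within_threshold(81, 10, 75))  # → False: outside the safe range
```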
Summary
This chapter began with defining assessment. Understanding the need
to assess, both as it relates to the need for upgrading as well as system mon-
itoring, is the key to this chapter.
Assessment in the Windows 2000 environment is done through the use
of the Computer Management program. Computer Management provides a
means of assessing hardware and software and their interactions with the
operating system.
Upgrades can take on the form of software, hardware, or operating
systems. Although each upgrade is unique, the stages involved in performing
the upgrade can be broken down into three key areas: planning the upgrade,
performing the upgrade, and managing the system after the upgrade. Plan-
ning the upgrade includes determining due dates; researching products and
vendors; confirming compatibility with existing hardware, software, and
operating systems; verifying component delivery; and thoroughly reading all
documentation. Performing the upgrade includes performing a full backup,
being aware of ESD, and actually doing the upgrade as planned. Managing
the system after the upgrade covers making sure that the upgrade was per-
formed successfully, verifying that there are no negative effects as a result
of the upgrade, removing any old hardware or software, returning access to
clients, and finally documenting the details of the upgrade.
Server availability and upgrade failures also need to be considered before
performing an upgrade. When planning for an upgrade, you will need to
assess the business requirements as well as the role the server plays on the
network to determine how much downtime can be afforded. Upgrade fail-
ures are bound to happen, and this is where your preplanning will be useful.
Creating a backout plan prior to the upgrade will help you to recover the
server if indeed the upgrade is unsuccessful.
Exam Essentials
Know the benefit of monitoring tools. Monitoring provides a means of
watching and assessing server performance.
Know how to use Windows 2000 System Monitor. Be familiar with
the available counters in System Monitor as well as how to add counters
and how to monitor performance.
Know the difference between counter logs, trace logs, and alerts.
Counter logs record information gathered by the performance monitor,
trace logs track changes and actions, and alerts send out a notification
when a threshold is reached.
Key Terms
Before you take the exam, be certain you are familiar with the
following terms:
alert
assessment
baseline
bottleneck
Counter Logs
driver signing
ESD
firmware
hard shutdown
HCL (hardware compatibility list)
log files
monitoring
monitoring agents
patches
POST (power-on self test)
surge protector
System Monitor
threshold
Trace Logs
upgrading
UPS (uninterruptible power supply)
virus protection suite
Review Questions
1. Which of the following is not a means of assessing and maintaining
a server?
A. Establishing baselines
D. Visual cues
A. Once
B. Weekly
C. Monthly
D. Constantly
B. Device Monitor
C. System Manager
D. System Monitor
4. What is an alert?
5. What is a bottleneck?
6. Which of the following is not one of the three main types of upgrades
that can be performed?
A. Hardware
B. Software
C. Operating system
D. Virus
A. Electrostatic discharge
A. Differential
B. Partial
C. Incremental
D. Full
C. A type of virus
A. A type of patch
13. Operating system updates are available for which of the following
server operating systems?
A. Windows
B. NetWare
C. Unix
15. Mike is doing a software upgrade to a server. He’s done installing the
software and testing the server and is just about ready to put the unit
into production. What one last item should he take care of before
making the server a production server?
A. Download and apply any service patches.
16. What is one way you can ensure that you will be able to recover a
server from a failed upgrade attempt?
A. Read the technical information associated with the upgrade.
17. Users on the network have been complaining that one of the servers
has been very slow lately and seems to be getting worse as the weeks
progress. What is your first step in alleviating the problem?
20. D. The final step in any upgrade should be documenting all the
details of the installation.
Before Upgrading
Before you perform a hardware upgrade, you need to do a little
planning:
Research the proposed upgrade to determine whether it is, in fact,
what is needed to improve the network performance.
Read the documentation thoroughly before beginning the upgrade.
Make sure you are familiar with each step in the process.
Make sure the hardware you choose is supported by the operating
system running on the server.
Be prepared to comply with ESD (electrostatic discharge) best prac-
tices, including the use of antistatic bags, and ESD wrist straps and
mats to prevent static charges from damaging delicate hardware com-
ponents. ESD damage is a serious threat to computer hardware and
can lead to hardware operation problems.
If the upgrade requires taking the server offline, plan a convenient time
to do this and notify your users in advance.
Adding a Processor
One of the most common components to be upgraded in a server is
the processor, whether you are upgrading a single processor or upgrading to
multiple processors.
In some cases you might be better off upgrading to a new server rather than
just upgrading the processor. This will depend largely on whether or not your
server’s motherboard supports current technologies.
Your new processor may come packaged with a new fan and heat sink, but even if you need to buy them, the cost will
be small compared to possible losses from overheating. Fans and heat sinks
tend to have a short life span. Check with the manufacturer as well because
using a fan and heat sink that have not been tested with the processor may
mean that your warranty on the new CPU becomes void.
As with any other upgrade, begin by reviewing the documentation that
comes with the hardware for procedures on how to add the component and
perform a full backup of your server.
Make sure that you correctly insert the processor. Be very careful to seat the
pins correctly before applying pressure; the pins bend easily. Make sure pin 1
on the new chip is aligned with pin 1 on the socket. Turning the server back on
while the chip is inserted incorrectly can permanently damage both the
motherboard and the processor.
Troubleshooting
The most common problem you will face when troubleshooting a newly
installed processor is the server’s failure to start. If this is the case, you will
need to put your troubleshooting skills to work and determine where the
problem lies. Start with these procedures:
Check to see that the newly installed processor is properly seated in
the socket or slot.
Check the documentation to verify that the jumper settings are prop-
erly configured.
Remove the processor to make sure that none of the pins are damaged.
Check whether a BIOS upgrade needs to be performed.
Check the manufacturer’s support site to see if there are any known
issues.
If all else fails, you still have the old component to fall back on. Reinstall
the old processor—if the server starts, then the new processor is probably
faulty and needs to be returned to the manufacturer.
After you determine that the upgrade has been successful, start baselining
again to ensure that the new processor is meeting your performance expec-
tations as well as the required demands.
Multiple Processors
A processor upgrade might very well include upgrading from a single pro-
cessor to a multiprocessor or upgrading multiple processors in a system.
In today’s server environments, most servers are indeed multiprocessor sys-
tems. Your first consideration will be whether or not your operating system
and motherboard support multiprocessors.
Stepping
Another consideration in multiprocessor upgrades is processor stepping.
Stepping is similar to version numbers: as updates are made to chips, the
version numbers change. You’ll want to consider processor stepping partic-
ularly when upgrading a single processor system to a multiprocessor one.
Mixing processor steppings does not always work well, if it works at all. The general rule of thumb is that a difference of one stepping (revision) between CPUs is acceptable.
Information on stepping compatibility can be located from the manufac-
turer’s website. Try to purchase a chip with the same stepping, although if
you are dealing with an older chip this may be difficult. Some operating sys-
tems will be more tolerant when mixing steppings than others will be.
Your first step in upgrading will be to determine whether you are dealing
with SCSI or IDE hard disks because this will impact which type of disk
you purchase and the procedure for installing it.
Some SCSI IDs will not be set using the jumper pins. For example, if you are
adding a SCSI disk to an existing hardware RAID implementation, the RAID
system itself will assign the disk an ID.
Termination also needs to be considered. Most SCSI devices are now self-
terminating but you will need to consult the manufacturer’s documentation
to determine this (there may be a termination jumper pin that needs to be
set). Usually the SCSI adapter and the last disk on the chain are terminated.
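The ID and termination rules above can be expressed as a quick sanity check. The chain representation here is invented for illustration (narrow SCSI, IDs 0-7):

```python
def validate_scsi_chain(devices):
    """Check a narrow SCSI chain: unique IDs 0-7, termination on both ends.

    `devices` is an ordered list of (scsi_id, terminated) tuples, adapter
    first. Returns a list of problems; an empty list means the chain looks sane.
    """
    problems = []
    ids = [dev_id for dev_id, _ in devices]
    if len(ids) != len(set(ids)):
        problems.append("duplicate SCSI IDs on the chain")
    if any(not 0 <= dev_id <= 7 for dev_id in ids):
        problems.append("SCSI ID outside 0-7")
    if not (devices[0][1] and devices[-1][1]):
        problems.append("chain not terminated at both ends")
    if any(term for _, term in devices[1:-1]):
        problems.append("device in the middle of the chain is terminated")
    return problems

# Adapter at ID 7, two disks; only the ends of the chain are terminated
chain = [(7, True), (0, False), (1, True)]
print(validate_scsi_chain(chain))  # → []
```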
Before physically installing the disk, set the jumpers to establish the master/slave
relationship using the manufacturer’s documentation.
If you are installing a second drive from a different manufacturer, verify its
compatibility with the existing drive. Also, to save yourself some headaches,
make sure there is adequate cabling before actually beginning the upgrade.
Once the jumpers are set, the drive can be mounted into an empty bay and
the IDE cable attached to the new drive, making sure that pin 1 of the cable
is matched up with pin 1 on the drive. Once the drive has been installed, the
server can be restarted. Most servers will autodetect the drive, but you
should enter the system’s BIOS to ensure that the new drive is listed. The
drive should also be listed during the bootup process if it has been installed
correctly. Your final step will be to format and partition the drive; how you
do this will depend on the operating system installed.
Troubleshooting
Troubleshooting an IDE upgrade can usually be resolved by answering these
questions:
Does the current BIOS support the size of the hard disk? (If not, a
BIOS upgrade will be necessary.)
Has the master/slave relationship been properly configured?
Is the cable properly connected? Has pin 1 on the cable been matched
to pin 1 of the hard disk?
Is the power connected?
Increase Memory
Operating systems and software applications have RAM (random-
access memory) requirements that must be met before the software will run
properly. It seems as though each new software release needs more memory
than the one before. This is probably the main reason to upgrade the server’s
RAM. Fortunately, memory is one of the easier upgrades to perform. The
first things you want to check are the available space in the server to add
more RAM and the type of RAM currently installed. For example, if the sys-
tem supports up to 512MB of RAM and there is already 256MB installed,
obviously the maximum you can add is another 256MB. Before purchasing
RAM, answer the following questions:
How much more RAM can the server support? This can quickly be
determined from the server’s documentation or from the manufac-
turer’s website.
What type of RAM (SIMMs [single in-line memory modules]
or DIMMs [dual in-line memory modules]) is currently installed
in the server? (Do not mix EDO and non-EDO RAM or ECC and
non-ECC RAM.)
What is the speed of the existing RAM? (The RAM that you add to
the computer must match the speed of the existing RAM.)
What type of contacts are used? (DIMMs use gold for all contacts but
SIMMs can use tin or gold. Be sure the new RAM uses the same metal
as the existing RAM.)
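The checklist above amounts to a field-by-field comparison of the candidate module against what is installed. The attribute names in this sketch are hypothetical, not a real inventory format:

```python
def ram_compatible(installed, candidate):
    """Apply the purchase checklist: module type, speed, ECC/EDO flavor,
    and contact metal must all match the existing RAM."""
    checks = ("module_type", "speed_ns", "ecc", "edo", "contact_metal")
    return all(installed[c] == candidate[c] for c in checks)

installed = {"module_type": "DIMM", "speed_ns": 10, "ecc": False,
             "edo": False, "contact_metal": "gold"}
candidate = {"module_type": "DIMM", "speed_ns": 10, "ecc": False,
             "edo": False, "contact_metal": "gold"}
print(ram_compatible(installed, candidate))  # → True
```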
Once you have determined the amount of RAM that can be added and
the specifications of the installed RAM, you are ready to make the purchase.
When purchasing new RAM, it is always recommended that you buy from
a reputable manufacturer. If you do decide to purchase off-brand RAM,
make sure you read the server’s documentation first to ensure that doing so
will not void your warranty. Keep in mind that some servers require that the RAM (usually SIMMs) be installed in pairs; you can verify this through the server's documentation.
Make sure to check off-brand RAM for compatibility with your current system.
When you are ready to install RAM in the server, power off the server and
disconnect the power to the motherboard (you may need to disconnect a few
cables to get to the socket). Remove the existing RAM and install the new
memory module. It is fairly straightforward to do since the module can fit in
only one way.
After you perform the RAM upgrade, you may get an error message during
the POST informing you of a mismatch error. Don't panic yet: Simply go into the BIOS, verify that the new RAM is recognized, save the changes and exit, then restart the server. This should clear the error message.
Troubleshooting
The following are some general things to consider when troubleshooting a
RAM upgrade:
Is the RAM properly seated and inserted all the way into the socket?
Try placing the RAM into a different socket.
Does the server boot with the old RAM alone?
Does the server boot with the new RAM?
Does the new RAM meet all the requirements to co-exist with the
original RAM?
Consider that the RAM may be faulty.
BIOS/Firmware Updates
Firmware (as discussed in Chapter 3, “Motherboards and Proces-
sors”) is software that controls hardware. Firmware updates on a server
include SCSI controllers, RAID controllers, tape drives, and CMOS BIOS.
Although other hardware devices have firmware, these four are the most
common firmware updates. When installing firmware updates be sure to
obtain the software from the product manufacturer. The potential risks of
component damage or inoperability should be a deterrent enough to not
download firmware updates from just any Internet site. Manufacturers design and verify firmware updates specifically for their products. By downloading from the manufacturer's site, you can reduce your risk.
One of the most common upgrades will be to the CMOS BIOS. Most
mainboards now use a flash ROM that can be reprogrammed countless
times using a flash utility. This means all you have to do is run an update util-
ity to upgrade the BIOS and the software will make all of the necessary mod-
ifications. You will need the make and model of the mainboard and the
revision number to locate the correct flash update and utility from the man-
ufacturer’s website. The download should contain data files, a flash utility,
and a Readme file. Flashing is not the only way to upgrade the BIOS but this
is the most common method with newer servers.
Make sure you download the correct BIOS update for your server. Flashing
the BIOS with the incorrect upgrade can leave your server unbootable.
A firmware upgrade can potentially leave your server unbootable so, like
any other upgrade you perform, a full system backup should be done before
proceeding. Also before upgrading the firmware, make sure to document
the current CMOS settings. Since some flash utilities clear the CMOS RAM,
you may need to restore some of your CMOS settings after the upgrade
is complete.
4. Restart the server using the floppy and start the upgrade as outlined in
the manufacturer’s instructions.
5. Once the upgrade is complete, remove the floppy and restart the
server. The new BIOS version should be displayed on the screen.
Proceed to the CMOS settings and reconfigure your parameters.
Again these steps are going to vary by manufacturer. Some may require the
CMOS settings to be cleared and others may require power to be removed
from the motherboard after the upgrade for a short period of time. This is why
it is important to carefully review the Readme file. If the upgrade is carried out
incorrectly, your server may be unbootable in the end.
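One hedge against a bad flash, assuming the vendor publishes a checksum for the download (many Readme files list one), is to verify the image before running the flash utility. A minimal sketch, with sample data standing in for a real BIOS file:

```python
import hashlib

def image_matches(image_bytes, published_md5):
    """Compare a downloaded BIOS image against the vendor-published
    checksum before flashing. A mismatch means a corrupt or wrong file."""
    return hashlib.md5(image_bytes).hexdigest() == published_md5

# Stand-in data; in practice, read the downloaded file and compare its
# hash to the value printed in the vendor's Readme.
image = b"fake BIOS image for illustration"
expected = hashlib.md5(image).hexdigest()
print(image_matches(image, expected))               # → True
print(image_matches(image + b"corrupt", expected))  # → False
```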
Upgrading Adapters
Upgrading adapters can include network interface cards, RAID con-
trollers, and SCSI cards. The upgrade may be in the form of a firmware
upgrade or replacing the old adapter with a new one. Either way, the pro-
cess of upgrading an adapter is fairly straightforward. If it is relatively
new hardware you are dealing with, you will probably be looking at a
software upgrade. If you are dealing with legacy hardware that is becoming
a bottleneck and performing poorly, chances are you will be looking at a
hardware upgrade.
Network Adapters
One of the most common adapters to be upgraded is the network adapter. In
most cases it is a fairly straightforward process, except of course when the
card is installed and doesn’t work. Then your troubleshooting skills once
again come into play.
When upgrading your network adapter, begin with a visual inspection of
the server and determine the type of slots available. Chances are you will be
using a PCI network card so you need a PCI slot available. You also want to
avoid resource conflicts, so determine what IRQs, I/O addresses, and mem-
ory addresses are available. Tools such as Microsoft’s Device Manager can
be used to determine what resources are available.
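The resource check that Device Manager performs can be approximated for IRQs with a short sketch; the device table below is hypothetical:

```python
from collections import defaultdict

def find_irq_conflicts(assignments):
    """Given a mapping of device name -> IRQ, return IRQs claimed by more
    than one device (ignoring that PCI devices can legitimately share)."""
    by_irq = defaultdict(list)
    for device, irq in assignments.items():
        by_irq[irq].append(device)
    return {irq: devs for irq, devs in by_irq.items() if len(devs) > 1}

assignments = {"COM1": 4, "sound card": 5, "new NIC": 5, "IDE0": 14}
print(find_irq_conflicts(assignments))  # → {5: ['sound card', 'new NIC']}
```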
Install the network adapter (using ESD best practices) by removing the
metal plate if necessary and inserting the card. Once the card is seated all the
way into the slot, it should be secured using a screw. Not doing so may result
in the card creeping out of the slot and no longer working (or worse, causing
a short inside of the server). Once the NIC is installed and the server is reboo-
ted, you should verify that the link light on the card is lit.
Some NICs will come with diagnostic software that you can use to test that the
NIC is functioning correctly. The software can verify that the different compo-
nents on the NIC are functioning and can also provide diagnostic reports. The
software diagnostics can test network connectivity. If you don’t have diagnos-
tic software, one of the simplest tests to make sure the NIC is functioning is to
log onto the server from a client desktop or, if you are running TCP/IP as the
protocol, you can use the PING utility. If you are unsuccessful in pinging a
host, check the IP address, the speed the network card is set to, and verify that
it is not having a resource conflict with another device.
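If neither diagnostic software nor PING is at hand, a rough connectivity probe can be improvised. This TCP check is an illustration, not a replacement for ICMP ping; it only confirms that one service port answers:

```python
import socket

def host_reachable(host, port, timeout=2.0):
    """Rough connectivity probe: True if a TCP connection to host:port
    succeeds within `timeout`. Needs no elevated privileges, but only
    tells you about the one port you probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice you would aim it at a port the server is known to expose, such as 80 on a web server.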
Summary
In this chapter you learned about some of the common hardware upgrades
that are often done to a server. CPUs, hard disks, and memory are the three most
common components to be upgraded in a server to improve performance.
All motherboards have limitations so, when upgrading a processor, con-
sult your documentation to determine the maximum speed supported by the
board. Also consider the design of the board—determine whether it is a slot
or socket design; this will affect the type of chip you purchase. If you
are upgrading to multiple processors, keep in mind that the recommended
stepping between processors is one step.
When upgrading hard disks, begin by determining whether you will be
dealing with IDE or SCSI disks. If you are dealing with SCSI, you need to pay
attention to termination issues and the SCSI ID assigned to the new device.
With IDE disks you need to pay attention to the master/slave relationship.
Before going ahead and increasing the memory in a server, you need to
first assess the RAM that is currently installed. Consider how much memory
can be added and the type and the speed of RAM already present. The type
of RAM you choose should be supported by the manufacturer; some war-
ranties will be void if RAM from another manufacturer is used.
Firmware updates are applied to fix bugs and to take advantage of new
technologies. The two most important things to keep in mind are to download
the correct BIOS version for your system and to make sure not to interrupt the
flash process; interruptions can leave your system unbootable.
A UPS upgrade can involve upgrading the UPS battery, upgrading the UPS
software, or replacing the entire system with a new one.
Once a server has been installed and configured, it needs to be monitored
and maintained on a regular basis to ensure it continues to perform optimally
over time.
Exam Essentials
Know the general procedures to use when upgrading hardware. There
are some best practices (such as ESD) that should be adhered to when per-
forming the upgrade of any hardware component.
Know what to look for when upgrading a processor. Understand the
different things to check for when upgrading a processor or adding an
additional processor to a multiprocessor system.
Key Terms
Before you take the exam, be certain you are familiar with the
following terms:
Review Questions
1. It has become clear that your server needs a firmware update. You are
debating when to apply this update. Which is your best option?
A. During lunch when very few users are accessing the server.
2. A colleague comes to you, the lead technician, and says she noticed on
the hardware vendor’s website that there is a new firmware upgrade
available for your server. She thinks you ought to apply it. What will
you do? (Select all that apply.)
A. Apply the upgrade at once.
C. NOS limitations
D. Bus limitations
C. Make sure you know how to reverse the procedure in case some-
thing goes wrong.
D. Read the Readme to find out what is involved with the upgrade.
E. Keep the server online throughout the entire procedure so users
have access to their data.
7. You’ve recently changed out your server’s IDE hard disk with a new
one but you can’t seem to get the hard disk to come up and be recog-
nized. There is an IDE CD-ROM in the system as well. What could be
the problem? (Select all that apply.)
A. BIOS doesn’t recognize the correct cylinders and heads.
B. CD-ROM is set to be master.
8. You need to upgrade the firmware in your server. What is the most
likely scenario for you to proceed with your upgrade?
A. Power the server off and upgrade the firmware.
9. What is likely to be your best source of information about the tasks the
firmware upgrade will accomplish and how long it will take?
A. The website at www.firmware.com
B. The documentation that came with the server
11. Which of the following should you consider when troubleshooting the
installation of a SCSI disk?
A. Master/slave relationship
B. SCSI IDs
C. Termination
D. Cables
13. Suzanne is working on a server that has four slots in it for DIMMs.
Two of the slots have 64MB DIMMs in them already. Suzanne wants
to add a 128MB DIMM, giving the system 256MB of total system
memory. When she adds the DIMMs, the power-on self-test memory
count shows the full 256MB but she now gets an error telling her to
adjust the BIOS. What could be the problem?
A. Nothing’s wrong.
14. You have a server that is RAM-starved. You purchase a DIMM from
a reputable memory manufacturing company, install it, and find that
the system won’t boot up. What could be the problem? (Choose all
that apply.)
A. The type of memory you bought isn’t supported by the computer
manufacturer.
B. System requires DIMMs to be installed in pairs.
B. Two steps
C. Three steps
D. Four steps
C. RAM stepping
D. Motherboard design
17. One of Wendy’s file and print servers has a very old Future Domain
SCSI I adapter in it and she thinks that by replacing the adapter she
can enhance the throughput of the disks and speed up the computer’s
operation. What are some concerns that Wendy must keep in mind as
she considers the upgrade?
A. Updated cabling
B. IRQ issues
18. How will you know when your computer needs a firmware update?
(Choose one best answer.)
A. Keep an eye on the hardware vendor’s website for release of
updates.
B. The hardware vendor will notify you by mail.
C. Your server will notify you via a message during a cold system
restart.
D. The server’s log file will indicate that firmware is out of date.
19. Luigi is going to add a second IDE hard disk to his server. He has a
Seagate 7.6GB hard disk in the system now and intends to install a
Maxtor 14.2GB to the computer. Besides the master/slave relationship
he has to be concerned about, what are some other issues?
A. Compatibility of vendors
D. Termination jumpers
20. You suspect that your server is RAM-starved so you order some more
RAM sticks and add them to the system. You’re startled to find out
that the OS doesn’t report the additional memory. What could be the
problem? (Select all that apply.)
A. Incompatibility in the RAM chips.
B. RAM is faulty.
two, devices to be added onto the chain. But Wendy might have sev-
eral devices and will have to provide an additional cable. Also she’ll
probably want to consider verifying the card’s BIOS version and
updating it if it’s older than the current one. The most important
concern she should have will be verifying if there are available PCI
slots in the server. The Future Domain adapter is likely to be an ISA
or EISA card but most of today’s cards are PCI. This could be a big
concern if the server is an older pre-PCI unit.
18. A. Check the hardware vendor’s website regularly to see if any
upgrades have been posted. If so, you should start your investigation:
Read the Readme to see what the upgrade fixes. Determine if your
system is having any symptoms that might indicate a need to apply the
upgrade. Then make your decision.
19. A, B, C. Often overlooked, but important, is the need to make sure
the disparate vendors’ disks will play in the sandbox with each other.
Also check to make sure the cabling is adequate and doesn’t need
replacement and whether there are other IDE devices in the system.
Finally, it might be important to figure out if you’re going to lose
your IDE CD-ROM because you’re adding a second disk and you’re
working with an older IDE bus that only supports two devices. You
don’t need to worry about termination jumpers with IDE devices.
20. A, B, C. Even if the BIOS hasn’t yet been updated with the new
RAM numbers (most BIOS utilities today won’t let you exit without
updating their configuration), the OS should report the new amount.
You’ve either got a bad RAM stick, an incompatibility, or the OS can’t
handle that much RAM.
You are the network administrator for a management consulting firm. Your
network has been running an older version of a network operating system
for about four years. Newer versions are out, and they all have new bells
and whistles that you would love to have: things like integrated manage-
ment tools, better user interfaces, and greater reliability. You know that you
need to upgrade, so what’s the problem? The problem is, management
does not want to spend the money.
Most likely, you are a techie who loves technology and especially enjoys
learning about and working with new technology. If you could make the
decision yourself, you would always upgrade your OS to the latest and
greatest just because of the many new features that are available in the new
OS. However, the problem is that in most companies, you are not going to
be the person who makes that decision, and this is for one simple reason—
you are not the one paying for the upgrade. The people who actually pay for
the upgrade are usually not techies—they are business managers. They
may be very interested in technology and see many benefits in using tech-
nology, but they are not as likely to be impressed by the new features that
techies find so interesting. The business people are usually interested in
technology for one simple reason—technology assists the business in mak-
ing more money.
So what do you do? You need to learn to speak business language. There are
very few companies who will upgrade the operating system on all their serv-
ers just because the new OS is the latest and greatest. In most companies,
the OS will be upgraded only if you can make a very clear argument that the
company will get a return on investment (ROI) on the upgrade. If you think
that the company should adopt the new OS because it is more reliable and
stable, you may have to talk to the business people about the benefits of the
computer system being available all the time, and the cost of any downtime
for the organization. If you think that the new OS provides better security,
then you may need to talk to the business people about the security threats
to their confidential information and how the new OS can help secure essen-
tial data. When talking to the business people about the upgrade, your task
is to demonstrate how the new OS will solve business problems. You know
that the technology is needed, and it will benefit the company. Now your
task is to prove that to the people who handle the money.
If you decide against reformatting the hard disk on the server and performing a clean install of the OS, you will find that almost all operating systems provide in-place upgrade functionality. Often, you can start an install of the newer
version of the OS on the existing server and the installation process will detect
the presence of the older OS and ask you if you want to perform the upgrade.
When you perform an upgrade, the configuration settings for the old OS are
migrated to the new OS so you do not have to reconfigure many of your set-
tings. The data that is stored on the server and the applications that run on
the server are usually not affected by the upgrade. In many ways, an in-place
upgrade is the easiest option when you need to upgrade your OS.
The alternative to an in-place upgrade is a clean install of the OS. There
are actually two ways to accomplish this. One option is to move all of the
data that you want to keep from the server onto a backup tape or to another
network location and then to format the hard disk on the server and perform
a complete install of the OS. After you perform the install, you can then rein-
stall any applications on the server and restore the data back to the server.
The second clean install option is to use a second computer. In this case,
you would install the new OS on the second computer and then move the
data and resources from the old computer onto the new computer. Users
would then connect to the new computer to get access to the resources. You
can then format the hard disk on the old computer and use it for another
purpose on your network or in your test lab.
It’s important to know the advantages and disadvantages of each type of
upgrade. Since in-place upgrades are the most common, let’s take a look at
them first.
This is usually the fastest upgrade path. If all goes well, you can have
a new version of the operating system installed within a couple of
hours.
Because all of the server settings such as computer name and IP
address are unchanged by the upgrade, you do not have to reconfigure
any clients that are currently connecting to the server.
Of course, no method of upgrading is perfect. When selecting an in-place
upgrade, be aware of the following potential disadvantages. Not all will
apply to your network, but some may. If the disadvantages are significant,
you may want to choose another method. Disadvantages include:
This process has the highest risk factor. While almost all operating sys-
tems do provide an upgrade path, the chances of something failing
during the upgrade are significant. The OS manufacturer tests the
upgrade thoroughly before releasing the product, but it is physically
impossible for the manufacturer to test all possible situations. If the
in-place upgrade fails and you have not prepared for this, you will
have to deal with a long and painful recovery process that may include
losing some data.
One of the reasons why server upgrades fail fairly frequently is that every
server has a history. This history may include OS upgrades, service pack
installations, application installations and removals, unauthorized and undoc-
umented server configurations, as well as several years of service. There is no
way that the OS manufacturer can perfectly duplicate your server’s history
to test the upgrade path, yet anything in that server’s history may cause the
upgrade to fail.
The server is not available during the upgrade. Even with the smooth-
est upgrade, the server will not be available for client connections for
several hours. If the server is running a business critical application,
you will almost certainly be doing the upgrade after everyone else has
gone home for the weekend.
Upgrading the OS may bring with it a legacy of problems. In many
cases, performing an upgrade of the OS leaves you with a series of
problems. In some cases, the problems are a result of an incorrect con-
figuration in the old OS—upgrading the OS will never fix an incorrect
configuration.
The OS upgrade itself may create problems. For example, if the old OS
used a particular system file and the new OS overwrote that system file
with a newer file with the same name, any applications or services on
the server that depended on the old version of the file could fail.
The old server may not be able to run the new OS. Most of the time,
newer versions of an OS require more hardware to run efficiently than
earlier versions of the same OS. Before starting an in-place upgrade,
ensure that the old hardware is supported in the new OS.
Some client applications on your network may require that the server
component be located on a computer with a specific computer name
or IP address. Before you choose the option to install the OS on a new
server and migrate all of the resources to that computer, you must
ensure that these applications can be modified to point to the new
server.
If you choose to install the new OS on a new server, you will require
at least two servers.
The in-place upgrade option is rarely the best option. One scenario where
this option would be the first choice is in a small company that has only one
server and cannot afford the downtime to perform a complete installation of
the OS and all applications.
OS Upgrade Procedure
Although an in-place OS upgrade is not usually the best option, there are
situations where you may want to choose this approach. If all goes well,
an in-place upgrade of the OS is the easiest upgrade option. For example,
to upgrade from Windows NT to Windows 2000, you can insert the Win-
dows 2000 CD-ROM, wait for the Autorun screen to appear, and essentially
accept all the defaults. (See Figure 11.1 for an example.) Upgrading a Linux
or NetWare server is almost as easy. However, this perception of ease can be
misleading when you are planning to upgrade a production server. The reason the perception is misleading is that the in-place upgrade is also the highest-risk upgrade.
The risk of performing an in-place upgrade is that the upgrade will fail:
You start the upgrade, everything works smoothly, and then the upgrade just
stops. Or the server may not boot into the new OS. If you have not prepared
for this to happen, you are about to spend a long night trying to get your
server back in working condition before everyone else shows up for work the
next morning.
Almost all of the work in performing an in-place upgrade is done before
you ever start the upgrade. You can split the preparatory work into two categories: first, the work that you will do to try to ensure that the upgrade will succeed, and second, the work that you will do to make sure that you can recover your server if the upgrade fails.
tested many upgrade scenarios, but they will not have tested the
upgrade on a server with your server’s history or with the particular
combination of applications on your server.
3. Clean up the server. You will increase your chances of a smooth
upgrade significantly if you clean up the server before the upgrade.
Cleaning up the server may include removing unnecessary applica-
tions, removing applications that will not run on the new OS, and
removing data that does not need to be stored on the server.
4. Remove or disable any antivirus software running on the server. Also disconnect the UPS; some operating system installations can cause certain UPSs to shut down the server mid-install. Just make sure you re-enable the antivirus software and reconnect the UPS when you're done.
5. Test the upgrade. Before you perform the upgrade in a production
environment, perform the upgrade repeatedly in a test environment.
The test environment should match the production environment as
closely as possible. If possible, use the same hardware and make sure
that all applications and services running on the production server are
also running on the test server. Performing the upgrade on a test server
first allows you to determine whether the upgrade is likely to succeed,
but it also might give you some experience in troubleshooting minor
issues that appear during the upgrade.
6. Read the instructions. This may seem self-evident, but it is often over-
looked. The first place to begin is to review documentation provided
by the OS manufacturer about the upgrade process. This documentation
is usually included with the source files for the new OS in the form of
Readme files. Often the support component of the company’s website
includes additional information in the form of white papers, technical
documentation, and troubleshooting information. A second source of
instructions is the hardware manufacturer’s documentation. There
may be specific issues of running the new OS on your particular hard-
ware—often the hardware manufacturer has experienced the same
problem and may provide a fix or workaround on their website.
7. Learn from other people’s experience. You are probably not the
first person to try this particular upgrade, so try to find out how the
upgrade has worked for other people. All OS and most hardware
manufacturers provide a forum for customers to share experiences
about the products. These are usually accessible through the company
websites. On these sites you can gain access to FAQs and company-
sponsored newsgroups. (See Figure 11.2 for Novell’s collection of
newsgroups for NetWare.) Often there are also public newsgroups
focused on the products that you are working with. These newsgroups
often provide extremely valuable information about what other peo-
ple have experienced and how they managed to get around problems.
One of the best ways to perform a complete backup of your server is to use a
disk cloning application to create a complete copy of your hard disk. Then if
the upgrade fails, you can restore the server to its previous state quickly.
If the server that you are upgrading is a file server with hundreds of gigabytes
of data on its hard disks, just clone the OS partition and make sure you have
a good backup of the data. Two of the most popular disk cloning applications
are Ghost (www.symantec.com) and Drive Image (www.powerquest.com/
driveimage).
Other Considerations
As you get ready to perform the upgrade, there are a couple of other issues
that you need to take care of. The first step is to schedule the server down-
time. In most cases, operations like OS upgrades are performed during non–
business hours. The server will not be available to users during the upgrade,
so it makes sense to perform the upgrade when fewer users need access to the
server. Even if you are planning on performing the upgrade when no one else
has to be at work, you should still let users know that the server will be
down. A user who decides to come in on a Saturday to get caught up on some
work will be very disappointed if the server they need is not available.
Some companies have little or no time when the server can be unavailable.
For example, companies that depend heavily on business generated from
their websites or companies with offices in many different time zones might
not want the server to be unavailable at any time. If your company does not
have a time window where you can perform the upgrade without causing a
significant disruption in the business processes, then do not perform an in-
place upgrade. In this case, you should perform a clean install using a second
server to provide the service while the original server is not available.
Another important component to a successful upgrade is to document
everything. As you perform test upgrades in your lab, you should document all
of the upgrade procedures you use. You should also test and document your
recovery plan. It is very easy to miss a step in a complicated procedure if you
don’t have accurate documentation. As you perform the upgrade, document
any errors you encounter and how you resolved the errors. This will become
valuable documentation for any future upgrades that you perform.
Many of the steps discussed in the previous section on in-place upgrade also
apply to upgrades through clean installs. You still need to configure a test lab
and thoroughly test every step in the process.
One of the best practices when configuring a server is to put all of the data on
a separate partition—or even better, on a separate hard disk—from the OS. If
you have followed this best practice, or if the data for the server is on a storage
area network (SAN) or network attached storage (NAS) device, then you do
not need to restore the data. All you have to do is format the partition where
the OS was located, and install the new OS—the data is not affected by the
upgrade and is still accessible.
Server Uptime
While 100 percent uptime is a goal for all network administrators, companies will vary greatly on how critical it is that you achieve it. This may
even vary depending on which server you are working on in the company.
If all of the business-critical applications in your company run on a main-
frame, and the small database server that you are upgrading is only used by
five people once a week, then you can probably take that server offline for
a couple of days without seriously affecting any business process. How-
ever, if the server you are taking offline is the primary web server for your
company’s e-commerce site, and you are right in the middle of the busiest
time of year, then taking the server offline for even an hour may cost the
company millions of dollars.
One of the first questions you need to ask when you get ready to upgrade a server is how much downtime you can afford on that server. Better yet, ask the business department that uses the server most heavily what would happen if the server were not available for 15 minutes, an hour, or a day.
The reaction you get from the businessperson is usually an excellent indi-
cator of how much network downtime you can afford.
Even though the goal is 100 percent, that will likely never happen. If your
company can hit 99.99 percent (called “four-nines” by the cool network
administrators) or even five-nines, that is extremely impressive. Four-nines
uptime would mean that the server is down for less than 53 minutes per
year. It’s certainly a goal worth shooting for.
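The four-nines arithmetic above is easy to verify. As a quick sketch (assuming a 365.25-day year; other conventions shift the result slightly), the allowed downtime per year for a given number of "nines" works out as follows:

```python
# Allowed downtime per year at "n nines" of uptime.
# Assumes a 365.25-day year; other conventions shift the result slightly.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(nines: int) -> float:
    """Minutes per year the server may be down at 99.9...% uptime."""
    return MINUTES_PER_YEAR * 10 ** -nines

for n in (3, 4, 5):
    print(f"{n}-nines allows about {downtime_minutes(n):.1f} minutes of downtime per year")
```

Four-nines comes out to roughly 52.6 minutes per year, which matches the "less than 53 minutes" figure; five-nines leaves barely five minutes of downtime for the entire year.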
2. Install and configure the network services and applications on the new
server. In most cases, these services and applications will be config-
ured to be identical to the old server.
3. Move the data and other resources from the old server to the new
server. Because the old server is still in production at this point, you
may have to use a synchronization tool to make sure that the resources
on both servers are identical and that changes made to the resources on
the old server are reflected on the new server. One example of such a
tool is Robocopy from the Windows 2000 Resource Kit. Robocopy
can be used to copy file resources from one server to another and
maintain the folder structure and the assigned permissions. After
the initial copy, Robocopy can also be used to synchronize the file
resources on both servers.
4. Once the server is stable and configured, test the connectivity to the
resources and applications on the new server.
5. If possible, select a small group of users as a pilot group to begin work-
ing on the new server. This pilot group is used to ensure that all of
the server components are correctly configured. Using a pilot group
in the production environment may not be possible if the server must
have the same name or IP address as the original server.
6. Once you are confident that the server is stable and all of the resources
on the server are accessible, configure all clients to use the new server.
7. If the new server appears to be functioning smoothly, remove the old
server from the network. For the first few days, you may want to just
shut down the server and leave it connected to the network. In this
way, if the new server fails unexpectedly, you can bring the old server
back online very quickly.
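The initial copy and ongoing synchronization described in step 3 can be sketched with Python's standard library. This is only a hypothetical stand-in for a tool like Robocopy, not its actual behavior — in particular, it decides what to copy by timestamp and, unlike Robocopy, does not carry over NTFS permissions:

```python
import shutil
from pathlib import Path

def sync_tree(src: Path, dst: Path) -> list[str]:
    """Copy files that are missing from dst, or newer in src, preserving
    the folder structure. A minimal illustration of the synchronization
    idea only; it does not copy NTFS permissions the way Robocopy does."""
    copied = []
    for src_file in src.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(src)
        dst_file = dst / rel
        if (not dst_file.exists()
                or src_file.stat().st_mtime > dst_file.stat().st_mtime):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)  # copy2 keeps timestamps
            copied.append(str(rel))
    return copied
```

Because `copy2` preserves timestamps, a second run copies nothing unless a source file has changed — the same property that lets a tool like Robocopy resynchronize cheaply after the initial copy.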
Upgrading the operating system is usually not a complicated task. How-
ever, it can get quite complicated if there are applications being used on the
network where the old server NetBIOS name or IP address is hard-coded in
the application so that it is very difficult to change. If this is the case, it is
more difficult to manage the upgrade because the two computers cannot
both be used in the production environment at the same time. Often, rewriting the application to point to the new server is not feasible, so the new server
will have to have the same name as the old server—but two servers with the
same name cannot exist on the same network. To get around this, you can
build and test the new server on a network that is isolated from the produc-
tion environment. Then when the new server is stable and has all of the latest
information, you can remove the old server from the network and connect
the new server.
system files for the OS will be replaced. In most cases, you must also restart
the server after you have applied the service pack. For these reasons, it is
critical that you plan carefully for the installation of the service pack. The
following steps provide a framework:
1. Obtain the service pack from the OS manufacturer. The service packs
are usually available either on a CD-ROM or on the manufacturer’s
website (see “Service Pack Websites” below).
2. Test the service pack in a test lab. Although the install of a service pack
doesn’t fail very often, it can happen and potentially result in an
unbootable server. You need to test the service pack against your
server’s configuration to make sure that the service pack does not have
unexpected results on your particular combination of software and
hardware.
3. Check the documentation that comes with the service pack release.
Often a service pack release will include documentation that details
known problems for the service pack install. Check the documentation
to ensure that none of the known problems apply to your situation.
4. Learn from other people’s experience. Wait a month or so after the
release of a service pack to make sure that it is stable. During that
month, monitor the manufacturer’s support website and monitor all
the relevant newsgroups for how the install is working in other envi-
ronments. Fortunately, there are enough network administrators who
do install the service packs immediately upon release, so let them do
some of your testing for you.
5. Back up your servers. Again, you want to prepare for disaster. If the
service pack upgrade fails, you must be able to get your server back up
and running as quickly as possible.
6. Schedule the downtime. Clients should not be connecting to your
server during the service pack install and the server restart, so let them
know when the server will be unavailable. Ideally, you should perform
the service pack install during non-working hours to ensure minimal
effect on the business users.
7. Install the service pack and document the results. If you run into any
problems during the service pack install, document the problem and
how you fixed it. This can be useful information as you install the
service pack on other servers.
Test Labs
This requirement means that you must have a lab available whenever you
need to do any testing. In a small company, there may be significant resis-
tance to dedicating even a couple of servers to a test lab and you may have
to create the test lab when you need it from older servers or high-end desk-
top computers. Larger companies often have a dedicated lab environment,
but sometimes the difficulty is getting access to the lab.
Develop lab test case procedures: To make the lab time as efficient as
possible, you should go into the lab with a complete lab test case. This
means that before anyone is given access to the lab, they should have
a detailed description of what they are testing in the lab, how they will
be setting up the lab, and what the expected effects of their testing will be.
Develop a way to rebuild the lab efficiently: The point of having a lab is
to be able to test changes to your network without affecting the production environment. However, you also need a mechanism to return your lab to the current production configuration as quickly as possible, so that you can try another test or so that someone else can use the lab without having to deal with your changes. The most efficient way to rebuild the lab
is to use disk-cloning software so you can build the lab exactly the way
you want it and then take images of all the servers. Then, after someone
has tested their software and made changes to the servers, you can
rebuild the servers back to their original configuration within minutes.
In some cases, you can automate the installation of service packs across
multiple systems. For example, you can use Group Policies in Windows 2000
Active Directory or Z.E.N.works in NetWare to automatically install service
packs on multiple servers in your environment. This can be a great time-
saver if you have hundreds of servers. However, this approach also requires
a high level of testing and disaster recovery planning. If you think having to
recover one server in the event of a disaster is tough, think about what it
would be like to recover hundreds of servers. If you decide to use the auto-
mated deployment tools, then test the deployment thoroughly. Create a pilot
deployment where you will deploy the service pack to a small group of servers.
Even when you are confident that the pilot has gone well, deploy the service
pack to a small selection of servers at one time. The most crucial question
that you need to deal with is disaster recovery time. Never automate this
deployment to more servers at one time than you can recover if the deployment
fails badly.
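The batching rule above — never deploy to more servers at one time than you can recover — can be sketched as a simple helper. This is an illustration only; the function name and parameters are hypothetical and not part of any deployment tool:

```python
from typing import Iterator

def rollout_batches(servers: list[str], pilot_size: int,
                    max_recoverable: int) -> Iterator[list[str]]:
    """Yield deployment waves: a small pilot group first, then the rest
    in batches no larger than the number of servers you could rebuild
    from backup if the deployment failed badly."""
    yield servers[:pilot_size]
    remaining = servers[pilot_size:]
    for i in range(0, len(remaining), max_recoverable):
        yield remaining[i:i + max_recoverable]
```

For example, with ten servers, a two-server pilot, and the capacity to recover three servers overnight, the helper produces a pilot wave of two followed by waves of at most three.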
Software Patches
The third type of server OS upgrade is a software patch. This upgrade
is less significant than the service pack upgrade because a software patch
usually deals with just one issue. Because a patch usually addresses only one
problem with the OS, it is not uncommon for an OS manufacturer to release
a new patch every few weeks. This means that managing the software
patch upgrades is more of an ongoing management issue than a large-scale
OS upgrade.
In most cases, service packs include all of the software patches that have been
released up to that point. If a software patch does not apply to your environment, then it is a good practice not to install it until the service pack is released.
This gives more time for people to discover problems with the software
patch. The service pack also contains a variety of patches, all of which have
been tested together rather than individually.
2. Check the documentation that comes with the software patch release.
Again, you are checking for known issues with the software patch
install.
3. Learn from other people’s experience. Monitor the manufacturer’s
support website and monitor relevant newsgroups for how the
software patch install is working in other environments.
4. Test the software patch in the test lab.
5. Back up your servers. Software patch installs seldom fail, but you must be prepared for the worst-case scenario.
6. Schedule the downtime.
Just as with service packs, you can automate the deployment of software
patches using Active Directory Group Policies, NetWare Z.E.N.works, or
another automated software installation service. The same cautions about
testing, piloting, and managing the deployment apply.
Many of the same issues that applied to service packs and OS patches also apply when upgrading hardware drivers. The first
question that you need to ask is whether you need to install all of the latest
drivers as soon as they come out. Just like OS patches, driver updates are
usually designed to address one or more small bugs that have been located in
the drivers. The documentation that is included with the drivers usually
clearly identifies the bugs that are fixed in the upgrade so you can decide
whether the update applies to your situation.
In some cases, you have little choice about installing the latest drivers for a
particular piece of hardware. When you place a support call to the hardware
manufacturer regarding a problem with their hardware, one of the first ques-
tions they ask is if you have installed the latest driver. If you haven’t, they will
often tell you to install the latest driver and then call back if the problem
persists.
If you work in a network environment where there are many different types of
hardware, you will find yourself needing many different hardware drivers. If
you don’t want to have to download the latest drivers every time you need
them, you should develop some way of maintaining a copy of the latest drivers
on your network. Most companies have a central share on a server where they
store all drivers. If you have access to a CD burner, it is also a best practice to
burn a CD-ROM with all of the most popular drivers for your network. Most
drivers are less than 2MB, so you can store many drivers on one CD-ROM.
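One simple way to keep such a central driver share usable is to organize it by vendor, device, and version, and to compare version folders numerically rather than alphabetically (so that 1.10 sorts after 1.9). The layout and function below are hypothetical, shown only as a sketch under that assumed folder structure:

```python
from pathlib import Path

def newest_driver_version(driver_share: Path, vendor: str, device: str) -> str:
    """Return the highest version folder under share/vendor/device,
    comparing dotted version strings numerically (so 1.10 > 1.9).
    The share layout is an assumption, not a standard."""
    device_dir = driver_share / vendor / device
    versions = [p.name for p in device_dir.iterdir() if p.is_dir()]
    if not versions:
        raise FileNotFoundError(f"no driver versions under {device_dir}")
    return max(versions, key=lambda v: tuple(int(x) for x in v.split(".")))
```

The numeric comparison matters: a plain alphabetical sort would rank version 1.9 above 1.10 and hand out a stale driver.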
One of the new features in Windows XP and Windows .NET is the driver roll-
back option. These operating systems maintain a copy of the previous drivers
that were installed on the server. If the new driver is causing problems, you
can choose to roll back to the previous driver.
UPS Upgrades
Another software component that is often installed on your servers is
the UPS software. The UPS software is used to configure the UPS itself. For
example, you might want to configure alerts on the UPS so that you receive
a notification whenever the power goes out, even if the power outage was
just for a short period of time and the servers were not shut down. The UPS
software is also used to gracefully shut down servers in the event of a long
power outage that exhausts the UPS’s battery power.
Upgrading the UPS software is similar to upgrading the monitoring tools.
In some cases the software includes a server component that is installed on one
server to manage the UPS and an agent that is installed on each server that is
protected by the UPS. Upgrading the UPS software sometimes requires that
you remove the old version of the software first and then install the new
software. In most cases, however, the new version of the software can be
installed over the old version and all of the settings will be retained.
Upgrading the UPS can also include replacing the batteries or updating
the firmware. If you are upgrading the battery in the UPS, disconnect it
from the power source, disconnect the terminals from the battery, and then
proceed with the replacement. Once the battery has been replaced, it will
need to be charged, so it is a good idea to have a second UPS on hand to
protect your servers from a power outage.
Summary
In this chapter, you learned about the various software upgrades that
you might need to apply to the servers on your network. The first type of
upgrade, and the most significant, is the operating system upgrade where
your entire OS is replaced with a newer version. When planning an OS
upgrade, the most important question that you need to answer is whether
you want to perform an in-place upgrade or a clean install upgrade. You
learned some of the reasons why you would choose either option, as well as
the procedures for performing both upgrades.
The second type of software upgrade that you learned about was the ser-
vice pack and software patch upgrades. In this case, you are applying fixes
to the operating system rather than performing a complete upgrade of the OS.
You learned when you should apply the service pack and software patch
upgrades, as well as the procedures for performing the upgrades.
The last major topic covered in this chapter discussed upgrading other
software that is installed on the servers but is not part of the OS. These soft-
ware components include hardware drivers, monitoring and management
tools, and UPS software. While these software upgrades may not directly
affect the OS, they must still be planned and managed carefully.
This chapter completes the major section of this book discussing all of the
different upgrades that you may need to perform on your servers. The next
section of the book focuses on proactive maintenance, or all of the things you
can do to make sure that your network is always available to the clients. The
first chapter in the next section discusses monitoring and management tools
and procedures.
Exam Essentials
Know how to prepare for possible failure when performing any soft-
ware upgrades on your servers. An essential component in any
software upgrade is to do everything you can to ensure that the
upgrade will succeed, as well as do everything you can to prepare
for a quick recovery if the upgrade fails.
Know which OS upgrade path is best for a given situation. The clean
install upgrade is the best upgrade option because it has the lowest chance
of failure as well as providing the most stable operating system after the
upgrade.
Understand when you need to install service packs and software updates.
Service packs and software updates are primarily bug and security patches
for the operating system. In most cases, you should install these only if they
affect you or if they are recommended by the OS manufacturer.
Understand the procedure for installing service packs and software
updates. Installing service packs and software updates is easy; however, you must test the installation to ensure that the upgrade works smoothly
and prepare a rollback plan in case the upgrade fails.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. Which of the following are advantages to performing an operating
system upgrade as opposed to a clean install?
A. All of the previous configuration settings are lost when you
perform an upgrade.
B. Applications do not have to be reinstalled.
7. Which of the following steps are you least likely to perform before
beginning an upgrade of an operating system?
A. Test for application compatibility.
8. You have performed a full backup of your server and developed a roll-
back plan. Before doing the upgrade in the production environment,
what should you do first?
A. Notify users that the server will be offline.
B. Test the upgrade plan in a test environment.
B. Service fix
C. Service pack
D. Software fix
A. As soon as it is released.
B. When it will fix a known bug.
12. Before proceeding with a service pack update on your server, what
tasks should be performed?
14. IBM has just released a collection of software patches. What type of
patch will you apply?
A. FixPak
B. Service pack
D. Software pack
A. It verifies that the driver will work with your operating system.
16. You are upgrading the server component of your network manage-
ment software. Before proceeding with the upgrade, what tasks should
be completed?
A. Back up the application data.
18. Novell has just released a collection of fixes. What type of patch will
you be applying?
A. Service patch
B. FixPak
C. Consolidated Support Pack
D. Service pack
Monitoring
Before setting up baselines, performance, and hardware monitoring,
you must first understand what monitoring is, why you should monitor, and
what to monitor. Unless you understand the reason behind a task, and the
expected outcome, you are wasting your time. Monitoring is used to watch
system performance, as well as assist with determining the MTBF (mean
time between failures). If we consider the server as one whole component,
the MTBF will be reliant on the proper operation of each individual compo-
nent. Should one component or software application stop responding, then
a failure will be recorded. The importance of keeping track of MTBF
becomes evident over the lifetime of the server. Trends in product life spans
can be determined to best match components within your server environ-
ment. Normally MTBF is used to track useful life period of hardware com-
ponents such as hard drives. Within a server, hard disks are working in a
stressful pace. Multiple client requests and RAID implementation often
will push hard disks to their limits. Unfortunately, each server’s operating
environment and stress loads create unique situations for each hard disk.
A hard disk that operates well in one situation may not in another. Manu-
facturers will often quote MTBF for their hard drives. This information
should be taken in context because your implementation will be different
and therefore your results can and often will be different.
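The MTBF arithmetic itself is simple. As a rough illustration (the figures below are hypothetical, not vendor data), MTBF is the total operating time divided by the number of recorded failures:

```python
def mtbf_hours(total_operating_hours, failure_count):
    """Estimate mean time between failures in hours.

    MTBF = total operating time / number of recorded failures.
    Returns None if no failures have been recorded yet.
    """
    if failure_count == 0:
        return None
    return total_operating_hours / failure_count

# A component that ran 26,280 hours (about three years) with 3 recorded
# failures has an estimated MTBF of 8,760 hours -- roughly one failure
# per year in that particular operating environment.
estimate = mtbf_hours(26280, 3)
```

As the text notes, treat any such figure in context: the same model of drive under a different stress load will produce a different number.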
What Is Monitoring?
Monitoring is the process of watching the effects and outcomes of a com-
puter’s actions. This can include hardware, software, and resources. For
example, monitoring resources will allow you to get a clear understanding of how the computer is using (or not using) resources during its daily operations.
Monitoring is not a one-time task. It should occur on a regular basis and over
a period of time so you can see how the server will perform during all the tasks
that it faces. This will include all the varying stress loads (users, Internet,
extranet, remote access, backups) and situations that the server will face.
Why Monitor?
Through the process of monitoring you are ultimately looking for trends
in server behavior. Is there a time when the server seems to be performing
poorly? What is happening during this time that could be a reason for
this poor performance? What is causing this performance problem? On
the other side of the coin there may be times when the server is performing
extremely well. Documenting the server settings (including resources used
and where) will assist you in creating a baseline. A baseline describes
initial server performance and provides a standard against which you can
compare future performance. Creating a baseline will involve assessing
server performance over a period of time. Based on the varying stress
loads, an expected level of performance, or adjusted baseline, can be
determined.
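As a sketch of the idea (the sample values here are invented), a baseline can be computed from readings collected over time, and later readings can then be compared against it:

```python
def baseline(samples):
    """Average a series of performance samples (e.g. % CPU) into a baseline."""
    return sum(samples) / len(samples)

def deviates(current, base, tolerance=0.20):
    """Flag a reading that strays more than `tolerance` (20% by default)
    from the established baseline in either direction."""
    return abs(current - base) > base * tolerance

cpu_samples = [22, 25, 19, 24, 20]   # % utilization over a typical week
base = baseline(cpu_samples)          # 22.0
```

A reading of 30 percent would be flagged against this baseline, while 24 percent would fall within the expected range.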
How to Monitor
Finally, a decision must be made on how to monitor. Not only will this
require setting up monitoring times but it will also require deciding on
what monitoring software and resources you will use. There are a multitude
of different monitoring tools available. Some software is included with the network operating system itself, while other tools come from third-party vendors.
Third-Party Monitoring
On the store shelves and the Internet are a multitude of third-party utility
programs that will provide hardware monitoring. Norton SystemWorks and Norton Utilities, for example, will monitor system performance, as well
as data received from the SMART utility (explained below under the “Hard
Disks” section). Unfortunately, using a third-party program can lead to
other issues such as compatibility with operating systems and installed soft-
ware. Running these programs creates even more stress on the resources that
are being monitored!
What to Monitor
Deciding what to monitor will vary depending on the server’s role within the
network environment. For example, if the server’s primary role is a print
server, then monitoring would include the print service, print queue, as well
as the network connection used to give access to the print server.
Generally monitoring involves carefully watching a few key areas within
the server itself, including processor performance, hard disks, memory usage,
and resource usage. These key elements not only provide a core by which
the server operates but also support software and operations for the entire
network.
Processor Performance
Processor performance is obviously a key area of server operation. If the pro-
cessor is taxed to the point that it cannot run programs, even the operating
system will begin to suffer. This can result in an operating system crash or
software failure. At best it will result in an extremely slow operation. Pro-
cessors for high-performance servers generally are unique in their cache size as
well as general structure. As you remember from Chapter 3, “Motherboards
and Processors,” there are several server-specific processors, such as the Intel
Xeon and Itanium, or the AMD MP processors. Even with these specialty
processors, problems stemming from too much workload can grind the sys-
tem to a stop. Performance monitoring for the processor will assist in watch-
ing for this trend and dealing with it before it becomes a serious problem.
Monitoring a processor is done through software utilities. Manufacturers
for motherboards also provide monitoring tools that will watch mother-
board performance, including processor voltages and fan speeds. Most pro-
cessor monitoring is done through the use of network operating system
utilities (covered later in this chapter).
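The trend these utilities watch for can be sketched in a few lines. This is only an illustration of the logic, not any particular NOS utility; the threshold and sample values are assumptions:

```python
def sustained_overload(utilization_samples, threshold=90, run_length=3):
    """Return True when `run_length` consecutive samples meet or exceed
    `threshold` percent utilization -- the sustained-load trend worth
    catching before the server grinds to a stop."""
    run = 0
    for pct in utilization_samples:
        run = run + 1 if pct >= threshold else 0
        if run >= run_length:
            return True
    return False
```

A brief spike is normal; three consecutive pegged samples suggest the processor is genuinely taxed and warrants investigation.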
When I started my last job, I walked into a room full of new Dell computers
and a shiny new Dell server. The hardware (including the network) was com-
pletely brand new and installed. One month into running the new equipment
we realized that the database-style software used to deliver programs to the
desktop computers was literally killing the server. Performance on the client
side would regularly grind to a halt and even freeze. This would result in lost
data and time. After confirming that the problem was not client-side, I turned
my attention server-side. After running some performance tests I realized
that when the database program was started, the processor utilization would
max out at 100 percent. The memory usage would also reach 100 percent.
The server was using all of its power to run the database program, leaving
none to do anything else. Unfortunately this server was also responsible
for the Internet, e-mail, authentication, file sharing, and printing, but when-
ever the need for another of these services arose, the server would run out of
free resources and lock up.
The solution was painfully evident. The server was underpowered. Through
performance monitoring I was able to determine that the server required
more RAM and a secondary processor. This was painful for the company
because this was a new server. Had the people responsible for ordering that
server better understood the software demands, they might have made a
better choice.
SMART Disks
Beyond diagnostic tools, manufacturers are now incorporating logic into
drives that will constantly monitor the disk and act as an early warning
system against disk failure or damage. This tool is called Self-Monitoring, Analysis, and Reporting Technology, or SMART. Available on IDE hard
disks, this feature is integrated into the hard disk’s controller and uses sen-
sors to monitor the disk. In order for the feature to work, you must also have
a BIOS that supports the SMART disk feature. SMART evolved from an
IBM initiative called Predictive Failure Analysis (PFA). Both SMART and
PFA are based around the concept that early warning signs of disk failure can
be found and reported before the disk fails. This will give the administrator
enough time to locate a suitable replacement and copy the data from the
failing drive. Unfortunately not all failures are slow progressions that
the SMART or PFA technology will pick up on. A chip failure, for example,
will be a sudden failure that will occur without warning. This would not be
caught by the SMART or PFA utilities.
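The comparison at the heart of SMART can be modeled simply: each attribute carries a normalized value that degrades over time and a vendor-set threshold, and failure is predicted when the value falls to or below that threshold. The attribute names and numbers below are illustrative only:

```python
def smart_warnings(attributes):
    """Given SMART attributes as (name, normalized_value, threshold)
    tuples, return the names whose value has degraded to or below the
    vendor threshold -- the condition SMART flags as impending failure."""
    return [name for name, value, threshold in attributes
            if value <= threshold]

disk = [
    ("Reallocated_Sector_Ct", 100, 36),  # healthy: value well above threshold
    ("Spin_Up_Time",           21, 25),  # degraded: value at/below threshold
]
```

As the text cautions, this only catches gradual degradation; a sudden chip failure gives no warning for the threshold logic to act on.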
Memory Monitoring
With the price of RAM at an all-time low, out-of-memory errors and memory
performance issues are happening less and less. Servers contain more
memory as motherboard manufacturers develop boards that now support
gigabytes of RAM. Memory usage should still be monitored. This will ensure
that as new software is installed the server will continue to perform its tasks
as well as cope with the stress of operating the new software.
RAM errors fall into two categories: hard and soft. Hard errors are per-
manent physical damage to a RAM module (commonly a damaged chip, or
stuck bit that is returning the same value every time). Hard errors, once they
happen, will continue to happen until the RAM is replaced. Soft errors are
more sporadic. In a soft error, a problem will occur and then vanish for a
period of time. As a result, soft errors are more difficult to diagnose, and they
are more common than hard errors. Soft errors result from failing RAM,
ESD, or RAM that is poorly matched to the motherboard. Diagnosing RAM issues is a serious matter. If RAM fails, the system will
suddenly come to a halt. Damage to software and operating systems that
were running is a serious concern.
Traditionally RAM was created in a non-parity format. This means that
there is one bit of RAM memory for each bit of data that will be stored in
RAM. Therefore eight bits of RAM will store eight bits of data. Parity RAM
contains an extra bit of RAM that is not used for data storage but rather for
error checking. Parity checking is a rudimentary method of detecting simple,
single-bit errors in a memory system. It has in fact been present in PCs since the original IBM PC in 1981 and was used in every PC until the early 1990s.
In order for parity checking to work, the BIOS must support parity RAM
and have the feature enabled. A newer method of error checking, called ECC (error-correcting code), began with the Pentium class of computers. Parity
checking provided single-bit error detection for the system memory, but did
not handle multi-bit errors and provided no means to correct memory errors.
ECC will detect both single-bit and multi-bit errors, as well as attempt to
correct single-bit errors. Like parity checking, ECC requires a setting in the
BIOS program to be enabled.
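A short sketch shows both the strength and the blind spot of parity checking: a single flipped bit is caught, but two flipped bits cancel out and slip through undetected, which is exactly the gap ECC closes:

```python
def parity_bit(data_bits):
    """Even-parity bit for a sequence of data bits: chosen so that the
    total number of 1s (data plus parity) is even."""
    return sum(data_bits) % 2

def check(data_bits, stored_parity):
    """True if the byte passes the parity check. One flipped bit fails
    the check; two flipped bits cancel out and pass -- the multi-bit
    case that parity cannot detect."""
    return parity_bit(data_bits) == stored_parity

byte = [1, 0, 1, 1, 0, 0, 1, 0]   # four 1s, so the even-parity bit is 0
```

Flipping the first bit fails the check; flipping the first two bits restores an even count of 1s and the error goes unnoticed.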
Resource Monitoring
Resources within the server are the last key area of monitoring. Resources fall
under the categories of IRQ (interrupt request lines), DMA (direct memory access), and I/O (input/output channels). All of these are general system
resources that are used by every computer. Servers face more stress in resource
management because they implement other network-based programs that
can be, and often are, simultaneously accessed by other computers on the
network. In earlier times of computing, MS-DOS included a program called
MSD (Microsoft System Diagnostics). This simple utility would show which
resources were free and which were in use. At a time when expansion cards
were configured manually through the use of jumpers, this was a valuable
tool. Modern computers and servers have moved away from manual resource
management and ISA cards to BIOS-controlled resources. At times though,
resources are still shared among numerous devices. Being able to monitor this
sharing and potential problems resulting from shared resources is a must.
Figure 12.2 is a screen shot from Windows 2000 Computer Management
showing the Conflicts/Sharing monitor under Hardware Resources. Notice
how you can view the shared resources.
The Hardware Resources monitor also allows you to view the DMA,
IRQ, Forced Hardware, Memory, and I/O information. We will look closer
at the Computer Management feature later in this chapter.
SNMP
Simple Network Management Protocol (SNMP) is a network management specification developed by the Internet Engineering Task Force (IETF), a subsidiary group of the Internet Activities Board (IAB), in the mid-1980s to provide a standard way of monitoring and managing devices on TCP/IP networks.
CMIP
Common Management Information Protocol (CMIP) may be a better
alternative than SNMP for large, complex networks or security-critical net-
works. CMIP is similar to SNMP and was developed to address SNMP’s
problems. However, CMIP takes significantly more system resources than
SNMP, is difficult to program, and is designed to run on the ISO protocol
stack.
The best feature in CMIP is that an agent can perform tasks or trigger
events based upon the value of a variable or a specific condition. For example,
when a computer cannot communicate with the network print server, an
event can be generated to notify the administrator. With SNMP, such a notification would have to be initiated by a user, because an SNMP agent does not analyze information.
Performance Monitoring
Objective assessment is based on logical analysis and benchmarking.
In this form of assessment, specific tools are used to accurately measure per-
formance of a hardware component. Usually benchmarking is the preferred
means of assessing hardware performance. An example would be taking two
comparable hard disks and installing them in identical computers. Tests
based on software usage and access time can then be gathered and compared.
Since the disks are installed in identical computers, any noticed differences
must be a result of the different disks because that is the only variable.
There are many different programs that are used to benchmark hardware.
High-Level Benchmarks These are programs that use code from popu-
lar application software such as web browsers and office suite programs.
The idea is to create the stress that standard users would normally create
if they were using the hardware. Hardware performance is then measured
under these common stresses.
Low-Level Benchmarks This type of benchmarking attempts to isolate
the component directly and remove any extraneous interference (such
as the OS). This type of testing is sometimes discounted by vendors
because it does not simulate normal computer use.
Real-World Benchmarks This type of benchmark can be performed
outside the lab, often by enthusiasts rather than engineers. For example,
someone might install several CD-ROM drives in one computer and then
measure how long it takes to install a popular game using each drive
in turn. The performance of each drive would then be compared and
published.
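A low-level micro-benchmark can be as simple as timing repeated calls to an isolated operation. The workload below is only a stand-in; a real test would target the component under scrutiny (a disk read, a memory copy, and so on):

```python
import time

def benchmark(func, iterations=1000):
    """Time `iterations` calls to `func` and return the average
    per-call cost in seconds -- a bare low-level measurement with
    no application code in the loop."""
    start = time.perf_counter()
    for _ in range(iterations):
        func()
    return (time.perf_counter() - start) / iterations

# Stand-in workload for illustration purposes.
cost = benchmark(lambda: sum(range(100)))
```

To make the comparison fair, run the same benchmark on both candidates under identical conditions, just as the identical-computer hard disk example describes.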
Caveat Emptor
I have seen several computer technicians fall into the trap of purchasing com-
ponents based on subjective performance and opinions. The worst place to
get caught up in these situations is at computer fairs. Manufacturers are at
these trade shows with equipment set up and running. Of course they are
connecting the equipment to the best computer components that money can
buy. The video card they are demonstrating looks fantastic. It will play the
latest games with no slowdowns. However, what are the other factors that
are influencing that performance? What processor and RAM are installed?
Always look for concrete facts on benchmarking of the product. Try to locate
information from unbiased sources. You can bet that the manufacturer’s
literature will not have anything negative to say about their product.
Operating systems from Microsoft, Novell, and Unix vendors provide performance-monitoring software that is as capable as, if not better than, many third-party offerings.
NetWare
Novell implements several software programs that will assist in monitoring
system performance as well as in locating potential problem areas. Network
management software such as ManageWise and NetWare Management Por-
tal are used to monitor system and network performance. Novell also offers
the Monitor program (which comes free as a part of NetWare). The Monitor
program acts as an interface for the administration of the server from a
remote console.
ManageWise
ManageWise consists of a server component and a console component. (This follows along with the Novell structure of remotely maintaining and administering a NetWare server.) The server half of ManageWise includes the following pieces, installed on the server:
NetWare Management Agent 1.6
NetWare LANalyzer Agent 1.0
The server components of LANDesk Virus Protect 2.1
The server components of LANDesk Manager 1.51
The Net Explorer component of NMS 2.0
Once installed, ManageWise provides several different management
services. Server problems are automatically detected, and notification is sent
to the ManageWise console, as well as to any other SNMP management
program that the server is configured to report to. ManageWise also allows
you to monitor and view key areas of server performance including available
cache memory, CPU utilization, and disk usage.
In addition to detecting server issues, ManageWise will also monitor and
detect network problems including analysis right down to the individual
packet level. ManageWise can capture and decode packets, and track what
applications are doing on the network. ManageWise will also monitor net-
work usage and performance. This will allow you to determine how growth
is affecting the network, and avoid possible problems such as those discussed
in the “New Server Upgrade” section above.
ManageWise also protects the network from viruses, on both workstations
and servers. Workstations are automatically scanned for viruses when they
are powered on, and again when they log into the network. ManageWise
also scans every file when it’s opened or closed, and immediately notifies the
network administrator if it detects a virus. Infected files are automatically
quarantined until the network administrator decides what to do with them.
Finally, ManageWise creates an inventory of both hardware and software for each workstation and stores it in a database. This database is distributed across all the servers on the network, each storing the information for its local workstations. When the administrator, at a ManageWise console, asks
for inventory information on a particular device, the ManageWise console
automatically queries that device, determines where its inventory information
is stored, and retrieves the information. Having information on network hard-
ware and software inventory can help you plan future upgrades.
The console part of ManageWise includes the NetWare Management Sys-
tem 2.0 and the console portions of LANDesk Manager 1.51 and LANDesk
Virus Protect 2.1. All the console pieces are automatically installed by the
ManageWise console install program.
The ManageWise console provides the user interface for all of the services
discussed above. The console is built around a map of the network automat-
ically generated by ManageWise. For instance, to manage a particular work-
station, a ManageWise user simply needs to find the workstation on the map
and then double-click it. This will bring up the Desktop Access window for
that workstation. From there the user will be able to remotely control the
workstation, view its hardware and software inventory data, transfer files to
and from the workstation, and so on, just by clicking the appropriate icon in
the Desktop Access toolbar.
NetWare Management Portal allows you to assess and monitor your NetWare servers through any computer with an Internet connection and browser. Through
this management tool, administrators can perform a multitude of tasks
including:
Modify and view configuration parameters
Load and unload NLMs (NetWare Loadable Modules)
Start and stop server processes
Check the way server memory is used
View and change registry settings
View and clear server connections
Set parameters for network interface cards, drivers, and disks
View the status of certain processes
Manage disk volume information
Compress large files
Change the attributes of volumes and files
Manage the file system
Unix
As mentioned earlier in this book, there are a multitude of Unix flavors. Each
flavor has noticeable differences and tweaks that make it suitable for a
specific application. With Unix operating systems being created with open
source code, third-party software is abundant. Monitoring and performance
assessment software are available from a variety of vendors. We will take a
look at a few of the more common utility programs.
Ntop
The ntop tool shows network usage and was designed to run on most Unix-based operating systems (including Linux). Ntop software will analyze network data traffic and provide the following functions:
Sort network traffic according to protocols
Show network traffic sorted according to various criteria
Display traffic statistics
Show IP traffic distribution among various protocols
Analyze IP traffic and sort the analysis according to the source/
destination
Display IP traffic subnet matrix
The ntop utility will run on fiber, token ring, and Ethernet networks. It
will also work over various protocols including IP, IPX, DECnet, AppleTalk, NetBIOS, OSI, and DLC. Figure 12.3 is a screen shot from ntop. Notice the
pie chart comparing the types of network traffic.
FIGURE 12.3 The Unix-based ntop utility displays global traffic statistics
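The per-protocol sorting that ntop displays amounts to tallying captured bytes by protocol. A minimal sketch of the idea (the capture records below are invented):

```python
from collections import Counter

def traffic_by_protocol(packets):
    """Sum captured bytes per protocol, most-used first -- the same
    per-protocol breakdown ntop charts for an interface."""
    totals = Counter()
    for protocol, size in packets:
        totals[protocol] += size
    return totals.most_common()

# Hypothetical capture: (protocol, frame size in bytes) pairs.
capture = [("IP", 1500), ("IPX", 600), ("IP", 900), ("AppleTalk", 300)]
```

Sorting the totals is what turns raw captures into the kind of pie chart shown in Figure 12.3.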
HardDrake Project
The HardDrake Project is a software program that makes configuration
of hardware in Linux easier. It provides hardware detection through the
use of a hardware-detection library. It is run through a simple GUI and supports Ethernet and sound cards. If you have used Linux before, you are well aware
of the difficulty that can arise in installing hardware. The HardDrake Project
helps alleviate this issue. Figure 12.4 is a screen shot from HardDrake.
KHealthCare
KHealthCare is a hardware-monitoring program that was designed for
Linux. It helps to predict possible hardware failures by using hardware mon-
itoring sensors and chips located on most modern motherboards. Most of
the ATX motherboards sold today support direct connection to fans and the
power supply. Through this means the motherboard controls fan RPMs
through voltages. The administrator sets threshold limits on items such as
fan RPMs, temperatures, and voltages. If a threshold is breached, an alert
can be sent to the administrator or KHealthCare can also be programmed to
automatically shut down the server. Figures 12.5, 12.6, and 12.7 are screen
shots from KHealthCare.
Lm-Sensors
The lm-Sensors tool monitors the hardware health of a Linux system.
Much like the KHealthCare program, lm-Sensors must be run on a
system that supports hardware monitoring.
Big Brother
Big Brother is a bit different from the other monitoring programs discussed. Big
Brother actually collects information that is broadcast from other systems to
a central location. At the same time Big Brother also polls connected systems
over the network. This process creates a redundant method of monitoring sys-
tem performances over a network. A colorful web-based GUI is used to help
decipher the information coming in. Simplicity was the focus in creating this
GUI: red is bad and green means all is good. Big Brother is supported on Linux,
Unix, and Windows-based operating systems. Figure 12.8 is a screen shot from
the Big Brother program.
Many monitoring and performance assessment utilities are available for Unix-based operating systems. For more information go to www.linux.org/apps/index.html.
Windows
The System Monitor has always been Microsoft’s primary tool for measur-
ing and monitoring system performance. The System Monitor is located in
the Administrative Tools folder under Performance. Figure 12.9 is a screen
shot from Windows 2000.
FIGURE 12.9 Windows 2000 Performance screen highlighting its System Monitor
The System Monitor is based around the use of counters. Counters are the
measurable values that are assessed. These values are selected by clicking
on the add (+) button located at the top of the System Monitor. A list of
counters is then displayed, as seen in Figure 12.10.
If you are unsure of the counter’s significance, then the Explain button
will give you more information on what the counter will actually measure.
The real excitement comes when you add more than one counter. By adding
multiple counters, data can be compared between the counters. Is there a
trend in performance degeneration? Do the interrupts per second influence
the processor usage time? By collecting and analyzing data from multiple
counters at one time, you can establish trends in performance and system
behavior. As you see in Figure 12.11, three different counters are running
at one time.
FIGURE 12.11 Windows 2000 System Monitor running three different counters
Performance data does not have to be analyzed in real time. Logs can be
generated over a period of time and then analyzed at a later time. This will
help you learn the trends of the network and resource traffic that your server
is facing on a daily basis. It becomes clear that there are specific trends in
behavior of the clients that access the server. If your server is providing mul-
tiple services rather than being dedicated to one task, then you can determine
the performance for each task and make a careful decision on the need for
upgrading. For example, it is common to find that the server is hit hard with e-mail requests first thing in the morning as everyone arrives at work and checks their e-mail. Later in the day that stress load may switch over to print
requests or database requests. How are these stresses handled by the server?
Log files can become extremely large. Make sure that you have plenty of hard
disk space to accommodate the file. It is advisable to run the log file for a brief
period (such as 10 minutes). Stop the log, and check its size. This number can
then be used to calculate the probable total size of your log file based on the
length of time that you were hoping to collect data for.
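The extrapolation in the note above is straightforward arithmetic; for instance, a 10-minute trial that produces 2,000,000 bytes of log data projects to 288,000,000 bytes (roughly 275 MB) over a 24-hour collection window:

```python
def projected_log_size(sample_bytes, sample_minutes, target_hours):
    """Extrapolate a short trial log to the full collection window:
    bytes-per-minute from the trial, multiplied out to the target
    duration. The figures are estimates -- actual traffic varies."""
    per_minute = sample_bytes / sample_minutes
    return per_minute * target_hours * 60

# Hypothetical trial: 2,000,000 bytes logged in a 10-minute run.
size = projected_log_size(2_000_000, 10, 24)
```

Compare the projected figure against free disk space before committing to a long collection run.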
When collecting data you need to be conscious of the fact that your mon-
itoring activities are actually increasing the load. Depending on the server’s
power (RAM and processor specifically), your results may be affected
significantly.
Remote Notification
Because most servers are secured in a safe environment, they often go
for extended periods of time without being checked on. Many servers also
have the keyboard, monitor, and mouse removed to prevent unwanted
access or tampering. With this limited contact, servers need a means of estab-
lishing communication with the network administrator should a problem
arise. Throughout this chapter we have identified and explored various
means of monitoring and identifying problems related to both hardware and
performance. Identification of these potential problems is no good if the
information is not received by the administrator in time to take proactive
measures. This is where remote notification comes in. Whether it is notifica-
tion of a potential virus, power loss and UPS takeover, hardware problems,
or performance issues, remote notification is an important component of
server operations. Even in an environment where there are limited numbers
of users and only one server operating, remote notification will ensure that
you know as soon as possible that there is a concern with the server. Unfor-
tunately this then places you on a 24-hour standby with the server. The
benefit of a remote notification is that if something like a fan fails on a Friday
night, you do not have to worry that the server was operating without the fan
and potentially overheating throughout the entire weekend. If urgent
enough, then you can deal with the issue immediately.
Remote notification can take on many different forms. E-mail alerts are a
common means of notification that something is not right, but are effective
only if you are constantly connected to your e-mail. Remote pager alerts or
phone calls are another option. Pagers fit nicely on your belt or in your pocket and can go with you anywhere.
Summary
This chapter began with an exploration of hardware monitoring.
Monitoring is a key responsibility of the server administrator. Monitoring
involves watching system performance in hope of catching potential failures
before they become a serious problem. Monitoring is also used to extend
the Mean Time Between Failures (MTBF). Monitoring usually involves
watching system resources, processors, RAM, and hard disk performance.
Monitoring can also be used to watch application performance.
Memory monitoring over time will tell you whether the server’s RAM
remains adequate for changing conditions and will also alert you to errors.
Parity is an extra data bit that is used for error checking. Newer forms
of RAM use ECC (error correction code) to monitor errors in RAM. ECC
RAM will also attempt to recover from memory errors. ECC also supports
checking multiple bits rather than just single-bit errors.
Hard disks contain many moving parts, and due to their stressful life,
are commonly the devices that will fail within a server. When a hard disk
is spun up and then powered down, it is called its start/stop cycle. Most
hard disks are guaranteed to live through 30,000 to 50,000 start/stop
cycles. Most hard disk manufacturers provide special diagnostic software
to assess their hard disks’ performance and current operability. These util-
ities can be invaluable in monitoring and assessing your hard disk.
SMART is a new feature that uses sensors combined with your BIOS and
operating system to monitor hard disk performance. SMART technology
evolved from IBM’s Predictive Failure Analysis technology.
Exam Essentials
Know the benefits of monitoring. Monitoring is the process of watch-
ing system resources and hardware to maintain an established baseline
of performance.
Know what a baseline is. Baselines are set standards of expected
performance.
Know the key areas that are monitored. These include the processor, RAM,
resources (IRQ, DMA, I/O), and hard disks.
Know the types of memory error control. Parity is an extra bit used to
check for RAM errors; ECC is error correction code, which can also
attempt to recover from memory errors.
Know what SMART hard disks are. SMART (Self-Monitoring Analysis
and Reporting Technology) monitors hard disks through motherboard
circuits.
Be familiar with MSD. MSD (Microsoft System Diagnostics) is soft-
ware created in the days of DOS that allowed you to assess and view
system resources.
Know what SNMP is. Simple Network Management Protocol is part
of the TCP/IP suite and is used to monitor network performance.
Know what CMIP is. Common Management Information Protocol is
designed as an improvement over SNMP, offering richer management
features at the cost of greater overhead.
Be familiar with the different forms of benchmarking. These include
high-level (using code from popular applications), low-level (testing
isolated components), and real-world (using common everyday situations
to test performance).
Know the performance utilities for the three main NOSs. NetWare uses
ManageWise, NetWare Management Portal, and Monitor. Unix/Linux
uses a variety of third-party utilities including ntop, HardDrake Project,
KHealthCare, lm-Sensors-Source, and Big Brother. Windows has relied
on System Monitor but does support third-party software such as
Hmonitor.
Know the benefits of remote notification. Alerting administrators by
e-mail, pager, or network notification will allow for proactive approaches
to dealing with potential problems before they become serious network
failures.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. What is monitoring used for?
A. Hardware
B. Software
C. Resources
D. Peripherals
C. Regularly
D. On a new server only
D. Conflicts
6. What is a baseline?
B. Hard drives
C. RAM
D. Expansion card bus speeds
C. SD and EDO
9. What is parity?
11. In the context of hard disk technology, what is the landing zone
used for?
A. It is a location on the hard disk for the read/write head to rest on.
12. What is the range of start/stop cycles that a hard disk is expected to
survive?
A. 20,000 to 30,000
B. 30,000 to 50,000
C. 50,000 to 60,000
D. 20,000 to 50,000
A. IRQ
B. DMA
C. I/O
D. ECC
B. AppleTalk
C. NetBEUI
D. TCP/IP
C. System Monitor
D. ManageWise
8. A. Two RAM fault categories are hard errors and soft errors.
10. C. Although there are several possible names for ECC, the only
possible answer from among the options is C.
11. A. The landing zone is a position on the hard disk where no data is
saved and the head is allowed to rest.
12. B. Normally a hard disk will support 30,000 to 50,000 start/stop
cycles in its lifetime.
13. C. SMART, or Self-Monitoring Analysis and Reporting Technology,
is a hard disk performance monitoring tool used with IDE drives.
14. B. PFA, or Predictive Failure Analysis, is an IBM initiative designed
to monitor hard disk performance.
15. A, B, C. IRQ, DMA, and I/O are the three resources that are
commonly monitored.
Wheels?
The most ridiculous adaptation that I have seen is a server on wheels. Many
servers were built with wheels on the bottom. I guess the idea was that
when you needed to move the server you could roll it out for maintenance
or other tasks. However, could you make it any easier for thieves? They
could just roll the server out the door!
Another way to secure a larger tower server is to bolt it to the floor. Many
of the larger servers do have provision for bolting. However, when you need
to perform server maintenance, will you be required to move the server?
This method may result in a lot of work at a later date.
A dedicated server room is probably the best option to use with a tower
server. Remember that if your business expands and your network requires
a cluster of servers, you will have more than one server to protect. Having a
dedicated server room will allow you to install and place the servers to your
liking within the room. Physical security is then left primarily to the quality
of the lock on the door and to who has keys to it.
Don’t be tempted to set the server up in a closet. I have worked at busi-
nesses where they tucked the server away in a closet or under a cabinet in the
staff kitchen. Not only are these locations insecure, they also do not provide
proper ventilation.
Remote Access
Direct physical contact is not the only concern when dealing with servers
today. With the ever-growing number of VPN and WAN connections
being used, remote access to the server is a serious area of threat. High-speed
Internet connections can also open a door for remote connections, including
unauthorized ones. Careful planning and attention to security issues when
dealing with remote access is a must.
duration of connection) and file and folder access; you can also set the OS to
log each connection. If your business has a large number of personnel that
are remotely connecting to the server, implementing these remote access fea-
tures would be a valuable security measure. Authentication at this point is
also critical. Each user logging in to the server remotely should have an indi-
vidual password and user account.
Passwords
Passwords are a strange thing. Growing up, you were always told to
pick something that is easy to remember. With computer passwords, choosing
something that is “easy to remember” is the last thing in the world that you
want to do. Personal or otherwise memorable names do not make for secure
passwords. Secure passwords are based on the following characteristics:
A combination of alphanumeric characters is used, including mixed-
case letters and symbols (e.g., Iron$Steel67).
Passwords are frequently changed.
Password lists are maintained by the server, thus preventing people
from selecting the same password twice in a row.
A strict policy on password secrecy is enforced. There should be no
shared passwords.
A minimum length of eight characters for a password is required.
Account lockout is enforced after three unsuccessful password attempts.
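As a rough illustration, the rules above can be expressed as a simple policy check. This is only a sketch; the function name and exact rule set are illustrative, and a real server enforces these rules in the operating system's account policy, not in a script:

```python
import re

def meets_policy(password, history=()):
    """Check a candidate password against the characteristics above:
    minimum length, mixed case, digits, symbols, and no reuse."""
    checks = [
        len(password) >= 8,                    # minimum length of eight
        re.search(r"[a-z]", password),         # lowercase letter present
        re.search(r"[A-Z]", password),         # uppercase letter present
        re.search(r"[0-9]", password),         # digit present
        re.search(r"[^A-Za-z0-9]", password),  # symbol present
        password not in history,               # not a previously used password
    ]
    return all(checks)

assert meets_policy("Iron$Steel67")                 # the example above passes
assert not meets_policy("porsche")                  # memorable but insecure
assert not meets_policy("Iron$Steel67",
                        history=["Iron$Steel67"])   # reuse is rejected
```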
Remote Threats
The Internet poses a tremendous threat to server security. Consider
that a hacker can sit in the safety of his or her home and continually
attempt to break into a server until successful; this persistence is probably
the biggest threat to server security.
In its essence the Internet is nothing more than an immense wide area
network. Connectivity is through the TCP/IP protocol suite, which opens doors
to a multitude of potential security holes. Many local area threats are
identical to Internet threats, so there is some comfort in knowing that
securing the network against Internet threats will also provide security
against local threats. In order to effectively deal with the potential for
attacks through a network (whether local or the Internet) you must first
understand the types of attacks and how they work.
Types of Attacks
Attacks on networks are carried out for several reasons. Some attackers
seek information from your server. Other times they are looking
for the gratification of knowing that they can hack into a computer. At times
they are motivated by the knowledge that they are causing harm to someone
else. The nature of the attacker and his/her motivation can vary, but the
strategies used are similar. Attacks often take on the form of IP spoofing,
PING of death, WinNuke, and SYN flood.
IP Spoofing
IP spoofing involves sending packets of information with a fake source
address through the network or Internet. The server is led to believe that the
packets are coming from within the network. Using this method hackers
can trick the server into thinking that the hacker’s computer is part of the
internal network, and thus gain access. The use of a firewall can help in
preventing this form of attack.
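To illustrate the kind of rule a firewall applies against spoofing, here is a sketch of ingress filtering: packets that claim an internal source address but arrive on the Internet-facing interface are dropped, since a genuine internal packet could never arrive that way. The address range and interface names here are hypothetical:

```python
import ipaddress

# Hypothetical private address range used inside the LAN.
INTERNAL = ipaddress.ip_network("192.168.0.0/16")

def ingress_filter(src_ip, arrived_on):
    """Drop packets that claim an internal source address but
    arrive on the external (Internet-facing) interface -- the
    classic anti-spoofing rule a firewall applies."""
    spoofed = (arrived_on == "external" and
               ipaddress.ip_address(src_ip) in INTERNAL)
    return "drop" if spoofed else "accept"

print(ingress_filter("192.168.1.20", "external"))  # drop (spoofed)
print(ingress_filter("192.168.1.20", "internal"))  # accept (genuine LAN traffic)
print(ingress_filter("8.8.8.8", "external"))       # accept (ordinary Internet source)
```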
Ping of Death
The PING of death is a denial of service attack (DoS). A DoS attack prevents
any users (including legitimate ones) from using the network. The PING util-
ity is part of the TCP/IP protocol suite and is primarily used to confirm con-
nectivity through sending and receiving responses from another network
device. It is a type of echo test. The PING of death involves sending an
oversized ping packet, which overflows the receiving computer's buffer.
The result is that the computer will stop responding, or reboot. Patches
are available from operating system manufacturers that will assist in preventing
the PING of death.
WinNuke
WinNuke is a hacking method that is only usable against a Windows-based
operating system. If your network is using Novell NetWare, or Unix, you are
not susceptible to this form of attack. WinNuke sends special TCP/IP packets
containing what Microsoft calls “Out-of-Band” (OOB) data with an invalid
header. Windows-based operating systems do not know how to deal with
these packets, and they will crash the operating system. What
results is the blue screen of death, a blue screen with a Windows error. The
computer becomes unresponsive and will require you to shut the power com-
pletely off and then restart the computer (what is known as a hard shut-
down). Because Windows was not shut down properly, you will also have to
run a scandisk and most likely repair fragmented files that were open when
the crash occurred. Hopefully they are all recoverable.
SYN Flood
SYN floods are another kind of DoS attack. In normal TCP/IP communica-
tion the session is started with a packet containing a SYN flag. This SYN
flag requests that the receiving computer reply to confirm that it is ready to
start communication. A SYN flood attack involves flooding the receiving
computer with a multitude of meaningless packets all containing the SYN
flag. The receiving computer attempts to respond to all the requests and
in doing so consumes all of its resources. The result is an overburdened
computer that can’t respond to any more requests, even legitimate ones.
Network operating system manufacturers offer patches to help prevent this
type of attack.
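A toy model shows why the flood works. Assume, for illustration, that the server keeps half-open connections (SYNs received but not yet acknowledged) in a fixed-size table; the class name and numbers here are made up for the sketch:

```python
class TcpListener:
    """Toy model of a listening socket with a fixed half-open
    (SYN-received) backlog, as described above."""
    def __init__(self, backlog=128):
        self.backlog = backlog
        self.half_open = set()       # connections awaiting the final ACK

    def on_syn(self, src):
        if len(self.half_open) >= self.backlog:
            return "dropped"         # table full: new SYNs are refused
        self.half_open.add(src)
        return "syn-ack sent"        # server replies and waits for an ACK

server = TcpListener(backlog=128)

# The attacker floods spoofed SYNs that will never complete the handshake.
for i in range(200):
    server.on_syn(("attacker", i))

# A legitimate client now cannot get in.
print(server.on_syn(("client", 1)))  # dropped
```

Real stacks age out half-open entries and use defenses such as SYN cookies, but the resource-exhaustion principle is the same.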
Firewalls
Whenever you connect a private network to the public network (Internet),
you are opening doors for potential threats. In today’s ever-advancing
world, remote users and remote connections have become a staple of net-
work activity. Virtual private networks, corporate wide area networks,
remote access, and always-on Internet connections provide a multitude of
potential entry points that must be secured.
Firewall Technology
Firewalls can use several different technologies to limit packet and infor-
mation flow. Some of the more common means are access control lists,
dynamic packet filtering, protocol switching, and DMZs (demilitarized
zones).
Demilitarized Zone
A demilitarized zone (DMZ) is a dedicated section of your network that is
neither public nor private. It sits between these two areas. Since outside
users will access specific servers, such as web and e-mail servers, they are
placed in this special zone. The idea is that hackers will also go after these
specific servers; by placing them in a segregated area, away from the core
of your local area network, you reduce the risk to the rest of the network.
Normally a DMZ firewall computer has three network cards: one goes to the
Internet connection, one to the DMZ network computers, and the third goes to your
private LAN. Figure 13.1 is an example of a DMZ. The local network
connects to the firewall via the network switch. The DMZ also connects
through a switch to the firewall. The firewall then connects to the Internet.
This prevents direct connection to the Internet by either the DMZ or the
local area network.
FIGURE 13.1 This configuration prevents direct Internet connection by either the DMZ
or LAN.
DMZ
To Public Internet
Firewall
Private Network
Critical servers such as data servers should not be kept in the DMZ, but
rather in the private network. Web, FTP, and e-mail servers are normally
kept in the DMZ.
Protocol Switching
Protocol switching is another means of protecting your network from out-
side influences. The TCP/IP protocol suite, as you know, drives the Internet.
Most local networks also use TCP/IP. As you learned in Chapter 7, TCP/IP
is the protocol of choice for Windows 2000, Unix, and NetWare 5 and 6.
With many of the network attacks relying on the TCP/IP protocol, this
creates an easy avenue for an attack to occur. Protocol switching involves
two possibilities:
Use a different protocol on the local area network to prevent TCP/IP
based attacks from being effective.
Create what is called a dead zone between your local area network
and the Internet. This dead zone will contain a different protocol.
Using this approach you can still maintain the TCP/IP protocol on
your local area network. Figure 13.2 is an example of using dead zone
protocol switching.
In implementing a dead zone you will have to use two devices, such as
routers, to perform the protocol transitions. This can prove to be a costly
endeavor because routers are expensive.
Proxy Servers
Proxy servers are in common use today. They act on behalf of an entire net-
work. They are used to mask the IP addresses of the internal computers
that are connecting to the public network. Since some hacking software
attempts to identify the IP address of computers on the local area network
(which can then be used to attempt to gain internal access), the proxy is a
good means of protecting private networks.
When a proxy server receives a request, it will first dissect the packet, ana-
lyze it, and reassemble it. The request is then sent out onto the Internet. The
same dissection process is done when the information is received from the
Internet. Proxy servers often offer a variety of added features such as web
restrictions, virus monitoring, and tracking features.
There are many different types of proxy servers available. The following
is a list of the more common types of proxies:
IP Proxy will hide the IP address of all stations on the local area
network. As requests are made to the public network, the proxy will
exchange its own IP address for that of the requesting computer. This
makes all requests appear as though they are coming from the one IP
address (the proxy).
Web Proxy will handle all HTTP requests. Much like the IP proxy
this form of proxy will send all web-based requests through its own IP
address. Web proxies are also used to filter out HTTP requests as well as
files coming in. If you do not want your users to be able to download
music files from WinMX, then that can be blocked through the web
proxy. Another great feature of web proxies is their caching ability. When
a request is granted on a web proxy, a copy of the information is kept
for a period of time in a cache file. If another request is made for the same
file, rather than going onto the Internet to retrieve it, the proxy will check
its cache file. This will dramatically speed up the retrieval of files.
FTP Proxy is used to upload and download files between a server and a
workstation. FTP proxies offer the same filtering and protection as web
and IP proxies.
SMTP Proxy is used to handle e-mail requests. SMTP proxies will
dissect and examine incoming and outgoing email for viruses as well
as content deemed insecure.
Unix
The majority of the Internet is driven by the Unix operating system. Several
flavors of Unix have been created to address the Internet and firewall
technology directly. Many firewall products were created with Unix techno-
logy. Unix firewall protection can even be configured so that it is the only
service running on the server. This is an added deterrent to hackers, as there
is nothing to see, steal, or damage on the server if the firewall is the only
thing running.
Unix-based firewalls also offer the greatest flexibility in implementation,
supporting over 32 network cards. This allows connectivity to numerous
network segments and can control information flow between network
segments on the private side as well as the public side. NetWare supports
16 network cards while Windows is limited to 4.
In the past a definite downside to the Unix structure was the command
line interface. It took people back to the days of DOS (having to remember
commands that were typed in manually). Now Unix supports X Window,
which gives you a GUI with mouse support for interacting with the
OS and firewall.
NetWare
NetWare uses BorderManager as its firewall software. BorderManager uses
a NetWare administrator snap-in, which allows it to be managed through
NetWare’s Administrator utility. BorderManager offers superb client com-
patibility with support for Windows 95/98, NT, DOS, OS/2, and Mac OS.
Combined with the NetWare OS, BorderManager is considered to be one of
the best firewall protection systems available.
Windows NT/2000
With the security holes present in Windows operating systems, there has
been some hesitation in using products such as Windows NT Server as a
NOS for larger networks. With patches to fix attacks such as WinNuke,
many third-party firewall programs have been written. These third-party
programs run with NT domain security or with the new Active Directory
system in Windows 2000. A major benefit of Windows-based firewall soft-
ware is that it is managed through the familiar Windows-based interface.
Part of Microsoft’s server operating system is Microsoft Proxy, which
provides proxy and firewall services in an easy-to-set-up and easy-to-maintain
user interface.
Detecting Intrusions
It is great to install a firewall and proxy server, but if you are not monitoring
what is going on, then the added protection is nullified. Monitoring will
ensure that any holes that form in your defense, and any attempts to break
in, can be dealt with accordingly. Intrusion detection will take on three
forms: active, passive, and proactive.
Active Detection
Active detection uses system monitoring to search for hackers or suspicious
activity. Some advanced active detection software will also shut down ses-
sions that appear to be suspicious. Some active detection products available
include Cisco’s NetRanger, Memco’s SessionWall, and SATAN.
Passive Detection
Passive detection involves using devices that will monitor network or server
activity but nothing more. These forms of detection will use log files to store
attempts to break into the network but not take any action. Passive detection
will require frequent visitation by network administrators to check the log
files for suspicious activity.
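A minimal passive-detection pass over a log file might look like the sketch below; the log format, function name, and threshold are hypothetical, and a real tool would parse whatever format your server actually logs:

```python
from collections import Counter

def scan_log(lines, threshold=3):
    """Passive-detection sketch: flag source addresses with repeated
    failed logins recorded in a log file. The hypothetical log format
    here is a 'FAILED LOGIN from <ip>' line."""
    failures = Counter()
    for line in lines:
        if "FAILED LOGIN" in line:
            failures[line.rsplit(" ", 1)[-1]] += 1
    return [ip for ip, n in failures.items() if n >= threshold]

log = [
    "FAILED LOGIN from 10.0.0.99",
    "LOGIN ok from 10.0.0.5",
    "FAILED LOGIN from 10.0.0.99",
    "FAILED LOGIN from 10.0.0.99",
]
print(scan_log(log))   # ['10.0.0.99']
```

Note the passive part: the scan only reports; it is up to the administrator reviewing the output to take action.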
Proactive Detection
Proactive defense involves analysis of your network to determine possible
openings for threats before they can occur. This is commonly done through
careful research and planning. The network administrator must be con-
stantly researching and preparing for all known plans of attack. Programs
such as SATAN allow you to scan your network and assist in identifying
security holes. These can then be fixed before the hackers can find them.
Temperature
Temperatures in the environment around your server should be kept at
70 degrees Fahrenheit (about 21 degrees Celsius). Computer components
that run in a hot environment
will not last as long as components that run in a cooler one. However,
extreme cold will also have negative results on performance. Remember that
many of the mechanical components (hard disks, fans, optical drives) use
a lubricant on their moving parts. Extreme cold temperatures can cause
the lubricants to become too viscous. This will result in added stress to the
motors and gears that turn these components.
ESD
As I have been sitting here writing this chapter, I can honestly say that you
do not have to be working within your computer to see the consequences
of ESD. I recently got up from the chair to stretch. When I sat back down and
touched the mouse, I felt a shock and the computer rebooted. ESD traveled
from my hand through the mouse to the computer and caused a hard shut-
down. Lucky for me I saved my work before the shock occurred!
Humidity
Humidity poses threats similar to those posed by temperature extremes.
Computers prefer 50 percent relative humidity. More-humid environments
can lead to condensation on components, resulting in short circuits, while
drier environments increase the risk of ESD.
Power Issues
Power issues include concerns with electricity coming in from the
wall to the server, as well as power control within the server itself. Electrical
problems are often the result of low electrical power, or large bursts of
electrical power. Both extremes of electricity are potentially harmful for an
operating server.
Low electrical supply is often called a brownout or sag. This is a dip in
electrical current. You may have experienced a brownout, or sag, in your
home when someone turned on a high-power-consumption item such as a
vacuum cleaner. The lights within the room would dim. The effects of a
brownout or sag on a server include stress on operating components such as
cooling fans and hard disk motors. Most brownouts are temporary and after
a few moments the power will increase back to normal levels. At times,
though, brownouts may be the result of an overloaded circuit. If at all pos-
sible, a server should be on a dedicated electrical circuit directly from the
electrical panel. This will ensure that brownouts originating within your
building can be avoided.
A complete loss of electrical power is called a blackout. In the case of a
blackout there really is nothing you can do. The electrical power is lost until
the source of the problem can be located and repaired.
Power spikes and power surges are the opposite of a brownout. A power
spike is a brief intense gain in electrical current. You can also experience
power spikes in your home. During the early evenings (from 5 P.M. till 7 P.M.)
you might find that the lights in your home can at times brighten for a second
or two. This is a power spike. It is caused by the multitudes of people who
Surge Protectors
A surge protector is a must for every server. Realistically a surge protector
should be used to protect every electronic device in your office or home.
These simple devices contain a special electronic circuit that monitors the
incoming voltage level and trips a circuit breaker when the overvoltage
reaches a certain level. The level is called the overvoltage threshold. Care
must be taken when selecting your surge protector because many have a
threshold that is set too high. Always purchase a surge protector that is
specifically created for computer use.
Line Conditioners
Line conditioners provide better protection than surge protectors when deal-
ing with surges and spikes. Line conditioners use several electrical circuits to
clean the electrical signals that are coming in. They are effective against
power spikes, surges, and brownouts. A UPS is an example of a line condi-
tioner. Issues with “dirty power” can also be rectified with line conditioners.
Dirty power contains signals that are from other devices, such as fluorescent
lighting, that have strayed into the electrical line.
RFI
RFI (radio frequency interference) is interference with server operation
caused by radio signals. It can result from a nearby radio or television,
a cellular device, or two-way radios. Using high-grade shielded cables
will assist in eliminating problems resulting from RFI. If you are using
external SCSI hard drives, you should definitely look at spending the extra
money to purchase shielded external SCSI cables. This will protect data
being saved to the external drives. Another solution to combat RFI is to
use fiber optic cabling. Fiber cable is completely immune to RFI as well as
EMI (see the following section).
EMI
EMI (electromagnetic interference) is caused by interference from magnetic
fields. The result can be a temporary problem or, if the computer is left
within the field for an extended period of time, a permanent one. Sources of
EMI include electrical motors, transformers, electrical panels, and heaters.
Key components within your server that can be affected include hard disks,
floppy disks, and network data traffic.
EMI Influence
I have seen the effect of EMI firsthand. I once had a client bring me his com-
puter tower after the hard disk became corrupted. After formatting and
rebuilding the operating system and data on the hard disk, I returned the
computer to him. Oddly enough, it was only a month later that he returned
with the same problem. Once again I reinstalled his OS and programs. This
time I assumed that it was user error, so I spent some time with him going
over how to use his computer, including properly shutting down Windows to prevent OS
corruption. However, like the song, The Cat Came Back, he returned approx-
imately one month later with the same problems. At this point we were both
getting annoyed. I rebuilt the computer yet again. This time I delivered the
repaired computer to his home. He was not there but his wife asked me to
set it back up for him. She then proceeded to guide me to the basement
where he had his computer desk right under the electrical panel. He was
plugging his computer right into an outlet directly on the electrical panel.
EMI from the electrical panel was causing data corruption on his hard disk.
Each time, within a month, the corruption spread to the point
where it damaged his operating system.
UPS/SPS
A UPS (uninterruptible power supply) will provide protection against
brownouts, blackouts, spikes, and power sags. A UPS consists of a battery
and line conditioner. Electricity from the wall enters the UPS and is filtered
through the line conditioner. From there the power charges the battery. The
server runs off of the battery. If the electrical power fails, then the server con-
tinues to run off the battery until power is restored or the battery is depleted.
Software is used in combination with the UPS to alert users, as well as the
administrator, that the server is running on battery power. Software can also
gracefully shut the server down. This will include exiting all programs that
are running, as well as the operating system, before the battery is depleted.
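The monitoring software's job can be sketched like this. The battery-reading and shutdown hooks are stand-ins: a real deployment uses the UPS vendor's monitoring interface and the operating system's own shutdown command, and the threshold and polling interval here are illustrative:

```python
import time

def monitor(read_level, shut_down, poll_seconds=30, low=20):
    """Poll the UPS battery level and trigger a graceful shutdown
    before the battery is depleted."""
    while True:
        if read_level() < low:
            shut_down()        # e.g. invoke the OS shutdown command
            return
        time.sleep(poll_seconds)

# Demonstration with stand-ins for the UPS link and the OS shutdown.
levels = iter([80, 50, 15])    # simulated battery readings during an outage
events = []
monitor(read_level=lambda: next(levels),
        shut_down=lambda: events.append("graceful shutdown"),
        poll_seconds=0)
print(events)   # ['graceful shutdown']
```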
An SPS (standby power supply) is similar to a UPS except that it uses a
switching circuit to switch between AC (alternating current) and DC (the
battery system). If a power drop occurs, the circuit will switch the server to
run on battery power. The major problem with an SPS is that if the circuit
does not react fast enough, power to the server can be lost and the server will
experience a hard shutdown.
A UPS is preferred to the SPS because there is less chance of the server
facing an abrupt shutdown if it is on a UPS.
Summary
The chapter began with the physical aspects of server security. This
includes ensuring that the server is safe in its location. With a rack-mounted
server, this is an easy job because everything bolts into the rack, which
should then be bolted to the floor. Rack-mounted servers can also have
a locking door on the rack, which will limit physical contact with the server
and other components in the rack.
Tower servers are more difficult to secure, because they are bulkier than
rack-mounted servers. Provision can be made to mount these servers within
a rack system or bolt them directly to the floor. The best solution is creating a
dedicated server room. This room should not have any windows (to prevent
break-ins) and should include security measures to limit physical access, such
as a strong door and lock.
Limiting server access covers several advanced measures that can be
incorporated to ensure that only authorized personnel are entering the server
room and accessing the server. These methods include biometrics, swipe
cards, smart cards, keypads, and video recording equipment.
Remote access is becoming a staple in most server operations. Remote
users connect to access data files and e-mail, and to perform tasks. Securing
remote access to servers involves using secure passwords that are frequently
changed, and enforcing a strict policy on password and network use.
Remote threats include IP spoofing, PING of death, WinNuke, and SYN
flood. Each of these attacks can occur through the private network or come
in from the Internet (or public network). IP spoofing involves sending pack-
ets of information to the server with a fake source address in hope of tricking
the server into believing that the hacker’s computer is part of the private net-
work. The PING of death is a type of denial of service attack. The idea is to
overload the server with packets in hope of forcing the server to crash, or at
least stop responding. WinNuke attacks Microsoft operating systems. It uses
the operating system’s inability to effectively deal with invalid TCP/IP packet
headers. The result is a blue screen of death and an operating system that
stops responding. SYN flood is another form of denial of service attack. It
involves flooding the server with TCP/IP packets requesting a reply. The
server will become overrun with these tasks and unable to process private
network requests.
Firewalls protect private networks from public traffic. Firewalls can be
software, hardware, or both. Firewalls use access control lists,
demilitarized zones, protocol switching, and dynamic packet filtering to
perform their job. Access control lists are a set of rules regarding network
traffic. A demilitarized zone is a dedicated section of the network that is
available to the private and public networks; located in these zones are web,
FTP, and e-mail servers. Protocol switching uses a dead zone to switch to
another protocol in hopes of creating a barrier that hackers cannot pene-
trate. Dynamic packet filtering happens when a firewall identifies suspicious
packets; this is done through a dynamic state table that is constantly updated
with new connection sessions that are occurring in the private network.
Proxy servers are another line of defense in securing your network. A
proxy server submits requests on behalf of the private network. Common
forms of proxy servers include web, IP, FTP, and SMTP.
Operating system firewalls include proprietary and third-party software
that is used as part of the network operating system. Unix supports many
third-party proxy services and is the driving force behind many web servers.
Novell offers BorderManager as well as third-party proxy support. Windows
servers are often considered the weakest of the operating systems in terms of
security breaches, but Microsoft does offer Proxy, which is an easy-to-use
and easy-to-set-up proxy service.
Black box firewalls take the strain of the firewall service off of the server
and into a dedicated separate device. Most black boxes are configured with
a Unix operating system and are self-running (once configured).
Intrusion detection is based around three areas: active, passive, and pro-
active detection. Active detection involves software that constantly scans the
server and network for suspicious activity. If found, an alert can be sent, or the
software can even shut down the communication session. Passive detection
will only record suspicious activity in a log file. It is then up to the adminis-
trator to locate and repair these security holes. Proactive detection involves
analyzing your network and locating the security breaches before they are
discovered by hackers. Once located, the problems can be fixed or secured
before hackers attempt to break in.
Temperature and humidity are controllable environmental conditions.
Ideally temperature should be kept at 70 degrees Fahrenheit. If the temper-
ature is too high, the server will not be able to cool the components properly
and the MTBF will drop. If the temperature is too cool, the chances of ESD
rise and moving components can become stiff, putting extra stress on the
motors that operate them. Humidity can also cause harm within a server.
High levels of humidity can lead to condensation on components, resulting
in a short circuit. Low levels of humidity will increase the chances of ESD
damage. The ideal humidity is 50 percent.
Power issues include low voltage problems (brownouts and blackouts) as
well as overvoltage problems (spikes and surges). Both undervoltage and
overvoltage problems can be addressed through the use of surge protectors,
line conditioners, and UPSs (uninterruptible power supplies). Other impact-
ing factors include RFI (radio frequency interference) and EMI (electro-
magnetic interference). Both can be problematic in data safety as well as
component failure if exposure is extended.
Ensuring adequate server power involves the use of a UPS or SPS. A UPS
is preferred over an SPS (standby power supply) because the UPS is more
effective at preventing a hard shutdown resulting from a sudden power loss.
Exam Essentials
Be able to plan for a secure server environment. This includes direct
contact security (hardware and software) as well as remote access threats.
Know the methods to limit access to the server. Be familiar with secu-
rity measures such as biometrics, swipe cards, smart cards, and keypads.
Know the elements that make for a strong password. This includes
incorporation of both alpha and numeric characters, frequent password
changing, setting minimum password lengths, enforcing account lockouts
on failed attempts, and establishing a strict password policy.
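The password elements above can be checked mechanically; a minimal sketch, assuming an eight-character minimum (the exact policy values are the administrator's choice, not mandated by the text):

```python
import string

def strong_enough(password, min_len=8):
    """Check the strong-password elements listed above: a minimum
    length plus a mix of alpha and numeric characters. min_len=8 is
    an assumed policy value."""
    has_alpha = any(c in string.ascii_letters for c in password)
    has_digit = any(c in string.digits for c in password)
    return len(password) >= min_len and has_alpha and has_digit

strong_enough("porsche")    # False: no digits, too short
strong_enough("blue42sky")  # True: letters, digits, nine characters
```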
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. Which form of server is generally easier to secure?
A. Rack servers
B. Tower servers
C. Desktop servers
D. Component servers
2. Besides the server, which of the following items should be kept in the
rack? (Select all that apply.)
A. Monitor
B. Keyboard
C. UPS
D. Mouse
A. Yes
B. No
D. Depends on manufacturer
D. Fingerprint scanning
A. A sign-in sheet
B. Warning signs
D. Firewall software
D. A user sending files to the server through the Internet e-mail system.
B. Betty
C. 53995
D. Porsche
9. What is IP spoofing?
A. Sending fake packets in order to trick the server into believing that
you are inside the network
B. Sending multiple IP packets in an attempt to overrun the server
C. Requesting multiple replies from the server in an attempt to overrun
the server's buffers
D. Sending special TCP/IP packets with invalid headers in an attempt
to crash the server's operating system
A. Directory of service
B. Denial of service
C. Disk operating system
D. Drive of system
D. RAM
B. Passive
C. Proactive
D. Black box
Backup Defined
A backup is nothing more than a duplicate of all files and folders on
a hard drive. This duplicate would be needed in case of damage or unwanted
change to the original. The key is unwanted change. Businesses often need to
restore data from a backup after an unwanted change has occurred to the
running copy. For example, a corrupted database can be corrected by restor-
ing from the backup copy. Consideration must be taken before doing this
restoration since the time that elapsed from the last backup to the point
of restoration translates to lost data.
There are many different backup devices and programs available, but
generally they all perform the same tasks in roughly the same way: The
backup software is installed and configured to run with the backup device.
The software is then pointed to the drive on which the data to be backed up
is located, the destination of the data is selected, and finally the backup is set
to start. Before getting to this stage though, you will have to decide which
backup device best serves your needs, so let’s begin there.
Backing up vs. archiving: These two terms are often confused. Making a
backup means creating another copy of your data, usually on a removable
storage device that is kept in a safe place. Archiving means preparing data for
long-term storage. For example, you can compress and archive unused sec-
tions of a database. Archiving files does not copy them over to a removable
storage device for safety; backing them up does.
Backup Devices
In the early days of personal computers, backups were done to
one medium only—the floppy disk. Operating systems included a simple
backup program for backing up data from a hard drive onto floppy disks.
When hard drives held only 5 or 10 megabytes, this was an adequate solu-
tion. This method was cheap (requiring you to purchase only floppy disks)
and simple to run. Once the data was backed up, the floppies could be stored
in a safe location. As technology advanced, backing up to floppies became
impossible. Remember that a floppy disk can hold no more than 2.88MB.
New applications and operating systems resulted in file sizes that made the
floppy disk an unsuitable method of backing up data.
Today there are a multitude of different devices available to back up data.
Magnetic tape, DAT (digital audio tape), DLT (digital linear tape), optical
disks, and removable hard disks are all currently used. What is paramount
is that the data being backed up can be removed. The RAID configuration
discussed in Chapter 5, “Fault Tolerance and Redundancy,” is an important
means of maintaining data availability when the server is running, but if
there is a complete system failure (resulting from fire, flood, or other natural
disaster, or from malice), then the information on those hard drives becomes
inaccessible. A backup device lets you copy your data and then take the copy
offsite for security.
Gaining popularity is the optical drive. Optical drives use a laser to
read (and write) to compact disks. Optical disks are made out of a thin film
compressed between layers of plastic. Unlike magnetic media, there is no
contact between the drive and the disk in optical devices. The laser reads
information from the disk without coming into contact with the media. This
not only limits drive wear, but also improves the longevity of the optical
media. Provided that the optical media is not scratched or physically broken,
it will last indefinitely. There are currently two main forms of optical drives
in use today, CD-RW and recordable DVD.
The CD-RW (CD-Rewritable) is becoming the device of choice for small
networks. With the price of optical devices and media dropping, CD-RWs
provide an affordable way to back up data quickly and securely. Another
real benefit of this method is the long-term life span of the CD. Once the data
has been backed up to the disc, the disc remains virtually indestructible, as
opposed to data on magnetic tape, which can be damaged by magnetic dis-
turbances. Although media sizes are increasing, CD technology is currently
limited to less than a gigabyte on one CD.
Recently, recordable DVD drives have been released. Several different
variations on disk capacity and drive technology for recordable DVD
are currently in development and initial release. Once standardized, this
technology will offer optical disk backups with gigabytes of storage on one
disk. One DVD standard that is becoming a leader in future DVD standards
offers 8 gigabytes of storage on one DVD media disk. Compared to the
1.44MB of available space on a standard floppy disk, it is clear to see how
the DVD technology is of great interest for many users.
Removable hard disk drives are another option for backing up data,
but the costs outweigh the benefits (especially when compared to the other
possible methods of backing up data). A removable hard drive is mounted
within a case on rails that allow it to slide into an opening on the computer.
Connectors at the back of the case allow for connectivity to the computer.
When you leave you can simply slide the hard drive out and take it with you.
Care must be taken with the drive to prevent rough handling.
Of these available devices, magnetic devices are the most popular. This
includes the DAT and DLT drives as well as portable drives such as Iomega’s
Zip and Jaz drives. These media hold large amounts of data (tens of gigabytes
on one cartridge), the replacement cost of the media is reasonable, and in
general the media are reliable.
Iomega offers products ideal for small-business and personal backup solu-
tions. Available as both an internal and external device, the Iomega Zip and
Jaz are affordable backup solutions. The Zip drive is available in 100MB
and 250MB options, while the Jaz is available in 1GB or 2GB sizes. Iomega
media is readily available at most retail outlet stores at a competitive price.
External drives interface through a USB port to the computer, and the
Device Capacity
CD-RW 650–800MB
DVD 4.7–17GB
Backup Methods
There are four different backup methods from which to choose
for your backup strategy. Each method has benefits and drawbacks.
Which method you choose depends on how much data needs to be backed
up each time and how much time is available for actually doing the backup.
Although it may seem best to do a full backup every day, it might not be
physically possible. Some backup software, for example, will not back up
files that are open or currently in use. If your backup takes several hours to
perform, then a backup during business hours will not work. If your business
works on a 24-hour basis, then a full backup every day is not possible and
you should choose another strategy. Let’s look at the available backup
methods.
Full Backups
A full backup is simple. Every time a backup is performed, all data is backed
up without skipping any files. The immediate benefit to this backup type
is that only one tape (or group of tapes if more than one is needed) is used.
Should the need arise to restore, only one tape (or group) must be located
and used. Full backups are the simplest type to perform, but they are also the
most time consuming because all data is being backed up every time the pro-
cess is performed. If you back up every day, then a full backup will occur
every day. With each of these daily full backups, information that has not
changed is also being backed up. For this reason, full backups are not the
most efficient. Figure 14.1 shows the data being backed up for a full backup
schedule. Each day of the week represents a typical backup schedule. As
most businesses operate on a Monday to Friday cycle, a full backup schedule
would involve performing a full backup on each of these days. Since the
office is not open on Saturday or Sunday there is no need to perform a
backup on these days. Notice that the amount of information being backed
up (expressed in gigabytes) is the same each day.
Differential Backups
With this backup strategy, a full backup is done periodically (usually once
a week), and a more-frequent backup (usually daily) is done only to those
files that have changed since the last full backup. Should there be a need to
restore, you will need the last full backup and the most recent differential
backup. The backup software uses an archive bit to achieve this. The archive
bit is an attribute that is set each time a file is created or modified. When the
differential backup starts, it looks for the set archive bits and backs up
these files only. When the next full backup is performed, the archive bit is
cleared and the process starts over again. Figure 14.2 shows a differential
backup schedule. Notice that in a differential backup schedule the informa-
tion being backed up grows each day until the full backup is reached again.
A full backup has been performed on Monday; each other day is a differen-
tial backup.
Incremental Backups
An incremental backup schedule is the fastest backup option. Usually, a full
backup is scheduled for once a week, and only files that have changed since
the previous incremental backup (usually done daily) are backed up. The
archive bit is cleared each time a backup occurs. Since each day's backup is
different from other days, all tapes will be needed to restore the data. This is the
slowest restoration of the three backup methods. Figure 14.3 is an example
of an incremental backup schedule. In this example you can see that the
information backed up each day varies in gigabyte size. A full backup is per-
formed on Monday, and every other day of the week only the files that have
changed since the previous day are backed up.
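The archive-bit behavior of the full, differential, and incremental methods can be sketched in a few lines of Python (a simulation of the logic described above, not real backup software, which reads the archive attribute from the filesystem):

```python
def run_backup(files, method):
    """Simulate one backup run. `files` maps filename -> archive bit
    (True means the file changed since the bit was last cleared).
    Returns the files copied and updates the bits as each method does."""
    if method == "full":
        copied = sorted(files)                 # everything, every time
        cleared = copied                       # full backups clear the bit
    elif method == "incremental":
        copied = sorted(f for f, bit in files.items() if bit)
        cleared = copied                       # incremental clears it too
    elif method == "differential":
        copied = sorted(f for f, bit in files.items() if bit)
        cleared = []                           # bit left set, so each run
                                               # grows until the next full
    else:
        raise ValueError(f"unknown method: {method}")
    for f in cleared:
        files[f] = False
    return copied

files = {"payroll.db": True, "memo.doc": True}
run_backup(files, "full")          # copies both files, clears both bits
files["payroll.db"] = True         # the database changes the next day
run_backup(files, "incremental")   # copies only payroll.db, clears it
run_backup(files, "incremental")   # copies nothing
```

Running the differential branch instead of the incremental one shows the growth pattern in Figure 14.2: the same changed files are copied again each day until a full backup resets the bits.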
Custom Backups
Many of the third-party software backup packages that are available offer
a custom option. With this option, you can customize which files will be
backed up and with which backup method. The benefit is that critical files
can be set to back up fully every time, and less-critical files can be backed up
incrementally or differentially.
Backup Plans
Although it would be ideal to keep a warehouse full of backup tapes,
so that every day of your company’s existence would be kept and catalogued
for future reference, there is obviously no way this could be a reality. Backup
media and storage space cost money. If you spent $40 for each tape and had
one tape for every day in a year, it would cost you $14,600. Even to have a
basic rotation of tapes to use on a regular schedule will cost a substantial
amount of money. How you plan your backup rotation will depend on
several factors, including available resources to purchase tapes.
Rotation Schedules
Rotating tapes is a method of reusing tapes in a predetermined cycle. This
ensures that data is kept for a period of time on tape and then, when that data
becomes dated or redundant, that tape can be reused for a new backup. There
are several common rotations available. Depending on your backup needs as
well as your need to refer to past data, tape rotations offer several possible
solutions.
Daily Rotation
Not surprisingly, daily rotation is not considered a suitable backup strategy.
In fact, there is no actual rotation that occurs at all. Unfortunately, in small
businesses, or environments where nontechnical employees take care of tech-
nology, it can happen. In a daily rotation the same tape is used every day.
The problem is that if data becomes corrupt there is no means of restoring
unless you catch the problem before the backup occurs. Daily rotation also
falls short in that there is no offsite data safety. One of the key
elements of a backup strategy is that there is a copy of data off site. In a daily
rotation, there is one tape and often it never leaves the tape drive.
Weekly Rotation
Weekly rotations are based on a set of tapes for a week. Each day of the week
gets a tape. When you reach the end of the week, you start over, reusing the
tape from the last week. For example, the Monday tape would be used on
Mondays only, and so forth. At best, you can restore back one business
week. The benefit of this backup rotation is that it requires only one tape for
each business day in the week. The downside is that you can’t go back in his-
tory very far. If today you notice a problem that damaged data more than a
week ago, you can’t restore the data and fix the problem.
Monthly Rotation
Monthly rotation involves a weekly rotation but, in addition, each Friday’s
tape is kept for an entire month. This allows you to go back to any week
within a one-month period. A total of nine tapes are used (Monday to Friday,
and four Fridays for the month). Monthly rotations are based around the idea
that any data errors requiring a restoration will be reported shortly after the
error occurred. If they occur within the week then a restore can take place
from the previous day. Otherwise the restoration will have to come from the
weekly backups, resulting in data since that last weekend backup being lost.
Yearly Rotation
The yearly rotation builds on the monthly rotation. Along with having daily
tapes for each weekday and weekly tapes for each Friday, you also keep the
last tape from each month for a year. This allows you to go back daily for a
week, weekly for a month, or monthly for a year.
It should go without saying that you should carefully label your tapes, but
many times people will not do this simple step. Carefully label each tape with
its position within the rotation. In a monthly rotation there will be several
Friday tapes (one for each Friday in the month) and each one will have to be
labeled with its position in the monthly rotation, e.g., 1st Friday, 2nd Friday,
and so on. You don’t want to start mixing the tapes up. Some backup soft-
ware actually identifies the tape and will not allow tapes to be used out of
order. If you bring the wrong tape in to work, you will have to make another
trip to your offsite storage place to locate the right one!
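Deriving a label from the calendar date removes the guesswork; a sketch of the monthly-rotation labeling just described (the exact label format is an assumption, so adapt it to your own scheme):

```python
import datetime

def tape_label(d):
    """Label a tape for the monthly rotation described above:
    Monday-Thursday tapes are reused each week, while each Friday tape
    is labeled with its position in the month and kept for the month."""
    weekday = d.strftime("%A")
    if weekday != "Friday":
        return weekday
    nth = (d.day - 1) // 7 + 1                 # which Friday of the month
    suffix = {1: "st", 2: "nd", 3: "rd"}.get(nth, "th")
    return f"{nth}{suffix} Friday"

tape_label(datetime.date(2002, 3, 4))   # "Monday"
tape_label(datetime.date(2002, 3, 8))   # "2nd Friday"
```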
Grandfather-Father-Son Rotation
One of the more commonly used tape rotations is the GFS (grandfather-
father-son) strategy. Daily backups are known as the son. The last full
backup of the week (Friday, for example) is known as the father. The last
full backup of the month (which is kept in the yearly rotation) is known
as the grandfather. This rotation is based on the yearly rotation plan.
Tower of Hanoi Rotation
The Tower of Hanoi rotation cycles several media sets at different
frequencies. Media set A is used every other backup session. Media set B
starts on the first non-A backup day and
repeats every fourth backup session. Media set C starts on the first non-A or
non-B backup day and repeats every eighth session. Media set D starts on the
first non-A, non-B, or non-C backup day and repeats every sixteenth session.
Media set E alternates with media set D. With each additional media set added
to the rotation scheme, the backup history doubles. The frequently used media
sets have the most recent copies of a file, while less-frequently-used media
retain older versions. The decision regarding the frequency of rotation should
be based on the volume of data traffic. To maintain the required history of file
versions, a minimum of five media sets should be used in the weekly rotation
schedule, or eight for a daily rotation scheme. The Tower of Hanoi is the most
complex to use, but provides the best data protection.
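The set-selection rule above follows a binary "ruler" pattern: the media set for a session is determined by how many times two divides the session number. A sketch, assuming sessions are numbered from 1 and the last set absorbs the overflow:

```python
def media_set(session, num_sets=5):
    """Return the Tower of Hanoi media set (0=A, 1=B, ...) for a
    1-based session number. Set A runs every other session, B every
    fourth, C every eighth, D every sixteenth, and E alternates with D."""
    twos = 0
    while session % 2 == 0:        # count the factors of two
        session //= 2
        twos += 1
    return min(twos, num_sets - 1)

# The first sixteen sessions with five sets:
schedule = "".join("ABCDE"[media_set(n)] for n in range(1, 17))
# schedule == "ABACABADABACABAE"
```

Reading the schedule confirms the text: A appears every other session, B every fourth, C every eighth (sessions 4 and 12), and D and E alternate thereafter.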
Media Storage
Once you have decided on a backup rotation, you need to plan on the media
storage. Magnetic tapes are the most popular media in use today and need
careful storage. Much like the cassette tapes that preceded CDs in the music
industry, magnetic tapes are easily damaged by ultraviolet light, extreme
heat, extreme cold, and magnetic fields. Care must be taken in transporting
tapes. Don’t leave them where they will face any of the previously mentioned
dangers.
The first question to ask is, who is responsible for switching and
transporting the tapes? Where will the offsite storage be? What will happen
if the person responsible can’t perform the task? Obviously the traveling
salesperson is not an ideal candidate for this role because she is seldom in the
office. Also, more than one staff member should be aware of the offsite stor-
age site and have access to it in case the person primarily responsible for the
tapes is unavailable.
Your offsite storage site can be as simple as a safe, dry location within
your home, or as complex as a locking fireproof cabinet at a secure location.
How simple or complex will depend on the level of safety that your data
needs. The key is that the tapes do not stay in the same location as the server.
Remember the idea here is to have a safety net in place so if the server is
damaged, stolen, or destroyed, you still have your data.
The IT head at a large engineering firm had been in charge of the data
server for over three years when the server failed and the data on the
drives was corrupted. Unfortunately, he had not attempted to do a restore
before. When he started the restore, he realized that the data had not been
verified either. Even more problematic was the fact that the backup unit
had not been backing up properly to begin with. In the end, the engineering
firm lost drawings worth $2,000,000 and the IT head was fired. The moral
of the story is clear. Always check up on how your backup is doing. Just
because an error message doesn’t appear on your screen doesn’t mean
that everything is working as you expect.
Practicing the restoration procedure before an emergency occurs is
also important. This will not only give you confidence that you know what
you are doing, but also reduce the chance that you will make a mistake when
it counts.
Backup Software
Originally, backup software was extremely basic. It would back up
files selected through a command line interface on a local computer only.
There were no options for network use, and no advanced options. The
Backup utility for Windows has evolved from a command line program to
a GUI program. NetWare includes a backup program called SBackup, and
Unix has a command-line-based backup program called Tar (short for tape
archiving). Although these backup programs will work, third-party pro-
grams offer more options. Figure 14.4 is a screen shot from Microsoft’s
Backup utility.
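Unix's tar format is also accessible from Python's standard tarfile module; a file-based sketch of a backup-and-restore round trip (the paths are throwaway examples, and a real tape backup would target a device rather than an archive file):

```python
import os
import tarfile
import tempfile

# Create a throwaway "data" directory to stand in for the files to back up.
root = tempfile.mkdtemp()
data = os.path.join(root, "data")
os.makedirs(data)
with open(os.path.join(data, "report.txt"), "w") as f:
    f.write("quarterly numbers")

# Back the directory up to an archive, as `tar -cf` would.
archive = os.path.join(root, "backup.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(data, arcname="data")

# Simulate data loss, then restore from the archive, as `tar -xf` would.
os.remove(os.path.join(data, "report.txt"))
with tarfile.open(archive) as tar:
    tar.extractall(root)

with open(os.path.join(data, "report.txt")) as f:
    restored = f.read()            # back to "quarterly numbers"
```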
The backup software you select must be compatible with the drive and
operating system. Some devices come with their own backup software designed
specifically for the device. For example, Iomega drives come with proprietary
Iomega software.
Many of the third-party utilities will let you back up files that are secure
within a password-protected folder. This will be extremely important in
your server if you have folders for your users that are password-protected.
If you use a backup utility that is unable to gain access to these folders, the
data will not be backed up. Other features to be aware of are automatic ver-
ification of successful backups and notification of problems. Some of the
advanced programs can e-mail alerts when a possible error is detected with
the tape drive, while others offer data reduction tips, such as not backing up
scratch files, duplicate files, trash folders, or games. Although these features
come at a price, it may be worth the investment if you do not have the time
to regularly check up on the server.
Increasing in necessity and popularity are real-time 7×24 systems that will
back up open files. Because all files are backed up, even those within
applications that are running, ideally no data will be lost if a failure occurs. This
expensive solution is seen in high-availability servers.
Regardless of your backup device choice, a software program will be
needed to run the backup. There are many choices available in features
and prices. Select your software carefully to meet your current and
future needs.
Backup Troubleshooting
Backup drives, no matter what type and form they are, contain
moving parts. With the nature of the environment that they operate in,
backup devices do require regular maintenance to prevent component fail-
ure. Backup software and media may also require troubleshooting in order
to maintain proper operation. Software may need to be updated as well as
matched appropriately with other operating software to ensure compatibil-
ity. Media will deteriorate over time and require replacement. This section
will explore common troubleshooting issues when dealing with backup
devices, software, and media.
Oftentimes a backup failure is due to a dirty tape drive. Clean the drive
and attempt a backup again. If the backup still fails, then it may be the actual
media that is the problem.
Media Problems
Magnetic tapes do not last forever. Eventually wear and tear will cause the
tape to stretch out or the magnetic film to stop responding to read/write
requests. A common problem with large-capacity magnetic media occurs
when only a portion of the tape is used. As the tape is partially used, and
rewound and used again, the tape tension within the cartridge becomes
inconsistent. This causes problems for the tape drive motor and gears as the
mechanism tries to roll the tape. With the varied tape tensions, the drive will
face uneven resistance, which can result in the tape jamming within the drive
or the tape wrapping around the spindles in the drive.
Other media problems can occur from improper storage. Tape that
has been stored at extreme temperatures (too hot or too cold), or in any
other environment that is unfriendly to magnetic media, can suffer
performance problems.
Some drives support several types and sizes of media. Before you purchase
your media, confirm with the drive manufacturer the exact details of sup-
ported media. If you select an incorrect media, there is a chance that the
backup will not perform as expected.
Optical drive media (discs) are immune to the issues of magnetism seen
in the tape drives but face their own problems. Special care must be taken
when handling optical discs because the media communicate through light
sources and must be free from scratches, fingerprints, or any other visual
obstructions.
Hardware Problems
Most backup devices connect to the server through the SCSI bus. If the
tape drive is not working upon installation, there may be a problem with
the drive installation. Confirm the SCSI ID, LUN, cabling, and termination
(refer back to Chapter 4, “Storage Devices” for more information on SCSI).
If these configurations are not done properly, the backup device will not
operate properly. Confirm also that the drive has been cleaned recently.
As previously mentioned, a dirty drive might not perform as expected, and
drive failure is a possible result of continual use of a dirty drive. Drive heads
can become so contaminated that the dirt causes physical damage to the
drive components. Magnetic buildup on the drive’s internal components
can also lead to hardware problems, such as causing the tape to become
tangled within the drive.
Software Problems
Software problems often focus on the operating system’s interaction with the
backup drive. Depending on the type and version of operating system on
the server, specific drivers may be needed. The drivers provided with the
backup drive may also be outdated. Ideally you should check with the drive
manufacturer’s website to be sure you have the correct driver. Is there a new
driver available? If you update your operating system on the server, is there
a new driver available for the new operating system?
SCSI-based magnetic backup drives also contain firmware that might
need to be updated to ensure proper operation with both your operating
system and your selected backup software. It is advisable to check regularly
with the manufacturer of your drive for firmware updates. Many websites
offer the opportunity to join a mailing list for updates, which is a nice way
to keep informed of updates for your products.
Third-party software problems can also occur. Compatibility between the
hardware, operating system, and the third-party drivers can lead to prob-
lems. Before purchasing your software, confirm that it will be compatible
with your hardware and operating system.
Media Retirement
Because old tapes can cause a problem, tape retirement is a key element of
preventative troubleshooting. Magnetic tapes, cartridges, optical disks, and
other backup media all have a finite useful life. Check the backup media log
to determine the number of times that the media has been used. This should
then be compared with the media manufacturer’s recommendations for use.
Planning ahead for media retirement can prevent possible problems in
the future.
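Comparing the media log against the manufacturer's recommendation is easy to automate; a minimal sketch (the 50-use limit below is an invented placeholder, so check your media's documentation for the real figure):

```python
def tapes_to_retire(usage_log, max_uses=50):
    """Given a media log mapping tape label -> number of uses, return
    the tapes at or beyond the recommended use count. max_uses=50 is a
    placeholder, not a real manufacturer figure."""
    return sorted(label for label, uses in usage_log.items()
                  if uses >= max_uses)

log = {"Monday": 51, "Tuesday": 12, "1st Friday": 50}
tapes_to_retire(log)   # ["1st Friday", "Monday"]
```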
Summary
This chapter explored the options for backing up data that is on
a server. Various devices are available to ensure data is safely copied and
stored. Backup hardware falls into one of two categories: optical and mag-
netic. Magnetic devices are currently the most popular.
Backup methods fall under four main types: full, differential, incremental,
and custom. Full backups do a complete backup of all files each time the
backup is run. Differential backups back up all files that have changed since
the last full backup. Incremental backups do a backup of all the files that have
changed since the last backup (whether full, differential, or incremental).
A custom backup requires you to manually specify how the backup runs,
and can comprise a blend of the other three methods.
Media rotation schemes specify which tapes are used at which times in
your backup strategy. Weekly rotation uses the least number of tapes and
allows for files to be restored from up to one week earlier. Monthly rotation
uses a weekly schedule but also keeps each end-of-week backup for a period
of one month, allowing for data to be restored from each week up to a one
month earlier. Yearly rotations build on the monthly rotation to keep the last
backup of each month for a period of one year. This method of rotating tapes
through a one-year cycle is often referred to as the GFS (grandfather-father-
son) rotation. The Tower of Hanoi is the most complex tape rotation plan,
but can provide the widest time range of backups.
Safe media storage is an essential element to a successful backup strategy.
Whether optical or magnetic, backup media need to be carefully handled
and stored. Media should be kept in a dry environment free from extreme
environment changes and dangers.
Once a backup strategy is implemented, periodically a restoration should
be attempted to ensure that a successful restoration could occur when needed.
Most backup software also provides data verification. Verification compares
the original data to the copy on the backup media to ensure that it was copied
correctly.
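Verification can be approximated by comparing digests of the original and the copy; a sketch using SHA-256 (real backup software typically verifies during or immediately after the backup pass):

```python
import hashlib

def file_digest(path):
    """SHA-256 digest of a file, read in chunks so large files don't
    have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(original, backup_copy):
    """True if the backup copy is byte-for-byte identical to the
    original -- the essence of a verification pass."""
    return file_digest(original) == file_digest(backup_copy)
```

A digest comparison catches silent copy errors that a size-only check would miss.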
Backup utilities provided in the major operating systems include Backup
for Windows, SBackup for NetWare, and Tar for Unix. Tape drives often
come bundled with proprietary backup software. You can also purchase full-
featured backup software from third-party vendors.
Troubleshooting includes dealing with hardware, software, and configu-
ration problems with backup systems.
Preventative troubleshooting includes regular cleaning of magnetic heads
and planned media retirement. Monitoring the age and use of media will
minimize problems resulting from media failure.
Exam Essentials
Know the difference between archiving and backing up. Make sure
that you know the definitions of, and differences between, these two
key terms.
Know the different backup devices available. Be able to list the major
backup devices and their media capacities.
Explain the difference between a full, differential, incremental, and
custom backup. Know which backup methods clear the archive bit;
know how each method backs up data.
Explain the common rotation schedules. Know the difference between
a daily, weekly, monthly, and yearly rotation. Understand the terminol-
ogy in the grandfather-father-son rotation. Explain the Tower of Hanoi
tape rotation.
Explain effective media storage. Know the elements involved in effec-
tive media storage, including cost control, personnel management, and
safe offsite location.
Define verification and restoration. Know the difference between the
terms verification and restoration as they pertain to backing up data.
Know the backup software available in operating systems. Know that
Windows includes Backup, NetWare includes SBackup, and most Unix
distributions include Tar as backup software.
Be able to explain common troubleshooting for backups. This includes
SCSI device configuration, media problems, hardware problems (e.g.,
cleaning heads), and software problems.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
Review Questions
1. What is archiving?
2. What is a backup?
B. DLT drive
C. Floppy drive
D. Zip drive
A. 750MB
B. 850MB
C. 700MB
D. 1GB
A. 20–50GB
B. 35GB and up
C. 100GB
D. 20–50GB
A. 4.7–17GB
B. 20–50GB
C. 1–3GB
D. 650–800MB
10. Which of the following is the most commonly used backup device?
A. Optical device
11. A differential backup performed after the full backup will back up
what data?
A. All changed data since the last full backup.
B. Only the data that has changed since the last differential backup.
C. Only the data that has not changed since the last backup.
A. Yes
B. No
13. Which option best describes the files that will be backed up in an
incremental backup?
A. Data that has changed since the last full backup
14. Which backup method requires the least amount of tapes for a
restoration?
A. Full
B. Differential
C. Incremental
D. Custom
15. Which backup method requires the most tapes for a restoration?
A. Full
B. Differential
C. Incremental
D. Custom
B. Three
C. Five
D. One
17. In a monthly rotation what is the maximum time period that you can
go back for a restoration?
A. One month
B. One week
C. One day
18. The Tower of Hanoi rotation requires how many sets of media if
performing a weekly rotation?
A. One
B. Four
C. Five
D. Three
B. Backup
C. Unix backup
D. SBackup
7. A. The Iomega Jaz drive supports disk capacities of either 1GB or 2GB.
8. B. DLT drives support media sizes of 35GB and larger. Current
DLT drives support over 100GB on a single tape.
9. A. DVD media, although still in development and refinement,
supports capacities between 4.7GB and 17GB.
10. D. Magnetic devices, due to their reliability and large media capacity,
are the most commonly used backup devices.
11. A. Differential backups back up only the data that has changed since
the last full backup.
12. A. Differential backups rely on an archive bit to monitor files that
have changed since the last full backup was performed.
13. B. Incremental backups back up data that has changed since the last
backup.
14. A. A full backup contains all of the data, so only the most recent
full backup tape is required for a restore.
15. C. Incremental backups require the most tapes for a restoration. You
would need the last full backup and every incremental backup since
the last full backup.
16. C. Provided that you back up on a Monday to Friday basis, you
would require five tapes.
17. A. With a monthly rotation, you can go back one month because the
tape for each Friday is kept for a one-month period.
18. C. The Tower of Hanoi requires a minimum of five media sets.
19. A. Unix operating systems use tar as their default backup software.
20. B. Media retirement is the planned removal of well-used media before it
leads to problems. An effective media retirement schedule relies on
careful monitoring of media age and use.
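The Tower of Hanoi rotation mentioned in answer 18 assigns media sets on a power-of-two schedule. The following sketch is a generic illustration of that scheme for five sets labeled A through E (the function name and labels are invented for the example, not taken from any backup product):

```python
# Tower of Hanoi rotation sketch: set A is used every 2nd session,
# B every 4th, C every 8th, D every 16th, and E fills the remaining
# slots. The schedule follows the "ruler sequence" of trailing zeros.

def hanoi_set(session, num_sets=5):
    # session is 1-based; count factors of two, capped at the last set.
    level = 0
    while session % 2 == 0:
        session //= 2
        level += 1
    return "ABCDE"[min(level, num_sets - 1)]

schedule = "".join(hanoi_set(i) for i in range(1, 17))
print(schedule)  # ABACABADABACABAE
```

Set A is reused every other session and so wears out fastest, while set E is written only once every 16 sessions, giving the longest retention.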
these steps is blurry. Some information presented in one step might just as
easily fit into another step, but here are the working categories:
Exploring the risks
Understanding the impact
Creating strategies
Training for strategy implementation
Plan Maintenance
Documentation
Creating Strategies
After determining the risks, assessing their impact, and prioritizing the list
based on severity of the risk, it is time to develop strategies to deal with each
disaster potential. This stage will involve creating, in some cases, several
possible solutions to deal with each risk. In the case of hardware failure it
would include the following possible strategies:
Having spare parts on hand
Establishing contact with a local supplier to determine availability of
parts and shipping timelines
Training staff to identify as well as effectively deal with the problem
For example, a printing company relies on specific printer hardware to
perform most of its printing tasks. Strategies to cope with printer failure
would include having a backup printer available and training staff on how
to connect to and use the secondary printer.
Oftentimes the strategy development stage is difficult to define. Exact
costs of parts will change with time. Even parts availability will change
dramatically within a few months. (See also “Plan Maintenance,” below.)
Careful consideration must be given to the possibility that a server containing
dated hardware may not be easily repaired should a disaster strike. Older
components may not be readily available, and if located, these parts may also
be quite costly. This can be seen in the world of RAM. EDO RAM, which
was used prior to SDRAM, is rather difficult to obtain today, as it is no
longer stocked at most computer stores. The price of EDO RAM, compared
to SDRAM, is extremely high. For example, the current price for 128MB
of SDRAM is $39.95, while only 32MB of EDO RAM currently costs $74.95.
I can't even find a price for 128MB of EDO RAM for comparison!
Another consideration is involvement of staff at this stage. Determining
which staff will be involved and to what degree can be a difficult task.
Obviously there will need to be people involved in carrying out the disaster
recovery plan. Technical knowledge, leadership, and other favorable
traits should be carefully considered.
Plan Maintenance
Maintenance of the plan involves revisiting it frequently to account for
newly identified potential disasters as well as changes within the business
that would warrant updates. Changes that might warrant a plan update
would include introduction of new software and hardware to the network,
expansion of the business, new staff, or new technology use. The frequency
of plan maintenance will be determined by the business. If there are no
changes to the business and no foreseeable new disaster threats, then
maintenance to the plan will be limited.
This step should also include verification of available hardware and
software used within the disaster recovery plan. For example, a slightly
outdated plan might specify that a hard disk failure would require the
purchase of a new 2GB hard disk. The price and availability of this hard
disk will be significantly different within six months of the plan creation.
If this plan were still in use today without any update, it would be imprac-
tical and perhaps impossible to fulfill because a new 2GB hard disk is no
longer available.
Documentation
Documentation is the last step in creating an effective disaster recovery plan.
Without clear documentation, chaos would occur. When a severe disaster
strikes, people normally panic. Clear documentation will not only serve as
a visual reminder of the steps that need to be taken but also help in
restoring some form of order to an environment that seems out of control.
Documentation should include detailed steps to carry out the plan, as
well as copies of all updates to the plan. Having a list of plan updates will
allow you to track all changes that have been made, as well as allow for
acceptance or rejection of the changes. This way, when the plan has to be
carried out, reference to previous revisions can be made.
Copies of the plan documentation should also be made. These copies
should include hard copies, digital copies, and backup copies located offsite.
This will ensure that no matter what the nature of the disaster, there will still
be access to the plan.
Hot Sites
A hot backup site is an exact copy of the original business located in a
different physical space. This includes equipment, software, and data.
Although a hot site is the ideal situation, it is extremely expensive to create
and maintain. The costs would include everything from renting/purchasing
Cold Sites
Cold backup sites normally have equipment available, but not at the
level of a hot site. Computers will be available, but not in an identical
configuration to the primary systems. The idea is that there will be enough
equipment to run the business but at a basic state until the primary facility
can be repaired. An example would be using a simple dial-up Internet
connection as opposed to a high-speed link that would be available at the
primary site.
Cold sites are a much more affordable option. However, the transition
between the primary and cold site is more difficult, often requiring that
computers be set up and configured, and patches be applied as necessary.
Often cold sites are not maintained regularly. This can lead to unpleasant
surprises, including hardware and software conflicts.
The benefit of cold sites over hot sites is affordability. Eliminating the
cost of regular maintenance and equipment matching makes cold sites
substantially more affordable and popular.
Once the problem is identified, corrective measures can then take place.
If the hardware has indeed failed, replacement is the only option. At this
point, a full backup should be done. This will ensure that any unforeseen
problems that may occur as a result of the hardware repair will not result
in any data loss.
It is amazing the risks that technicians often take. A simple update such
as installing a utility can result in catastrophic consequences. One of the
servers I once worked on had experienced a problem with a tape drive.
Upon consultation with the server manufacturer, I decided to install a
utility program that I had downloaded from the manufacturer’s website;
the utility would do a diagnostic on the tape drive as well as the SCSI channel
that the drive was connected to. When I performed this task, the server (run-
ning Windows NT Server) did a physical memory dump and basically died
right before my eyes. The entire operating system became corrupt and
the system failed. Fortunately I had backed up the server the night before
and there was no data loss. The operating system had to be reinstalled
and the data, along with the security on files and folders, restored from
the backup tape. Had there not been a backup done, I would
have had to re-input all the users as well as set up their access and permis-
sions. Remember, it doesn’t matter whether you are doing a major system
upgrade, installing a patch file, or even doing diagnostics: always perform
a full data backup. Whatever can go wrong often does when you least
expect it.
ESD Mat
A major problem with the ESD strap is that the cable tethers one of your
hands. This will limit your mobility, and at times, the cable will become
entangled with the computer that you are working on. It is also possible (and
often does happen) for the wire to become disconnected from the ground
source. Many ESD straps use a coiled wire that does not provide much
distance between you and the outlet. An ESD mat is a better solution. This
mat is actually a rubber mat with a resistor and grounding cable attached
to it. It protects the computer against damage from both human contact
and charge from the table.
Oftentimes an ESD wrist strap is used in conjunction with an ESD mat.
This will provide protection from both human and tabletop ESD contact.
Antistatic Bags
All computer components are shipped in a special antistatic bag. These
special bags prevent static charge from coming into contact with the
computer components. It is highly advisable to keep these bags. At some point
you may need to transport a component or remove one temporarily. Having
an antistatic bag to store the component in will protect the component while
it is outside of the computer.
Antistatic bags can also be used as gloves. Since the bag prevents the
transfer of static charge, technicians have used them as gloves when they
need to get a better grip on a component. An example of this use is install-
ing expansion cards. At times considerable force is needed to seat the card
properly in the slot. Using the antistatic bag as a glove will allow you to
manipulate the card with a better grip. Once an antistatic bag has been used
for this purpose, it should not be used to store computer components.
The interior of the bag can contain residual charges from your hands.
Notification
If the repair to the server cannot occur during a time when there will be
no access, then proper notification will need to take place. This includes
advance notification to all people who will be affected by the shutdown.
With this notification, include the nature of the repair, the fact that the server
will be inaccessible, the start time, and the estimated completion time. Advance
notification should be given at least a week prior to the repair. This notification
can be done via in-house e-mail, regular in-house mail/fax, or verbally.
Regardless of the delivery method, you should include a verification process
to ensure that all intended recipients received and understood the notifica-
tion. Should there be any discrepancy at a later date, you will have proof
that the notice went out and was acknowledged by all intended people.
Finally, a second notification, much like a reminder, should be sent 24 hours
before the repair is to take place. This will ensure that those people who
forgot will be reminded and plan their day accordingly.
Software Failures
Software failures can also have a significant impact on server
operations. Before applying any update to a server, thorough testing should
be done. The unforeseen impact on both the operating system and the
installed programs can be severe. In an ideal environment, a separate test
server would be used to verify every patch, new software installation, and
firmware update before it is installed on the production server, but it is
rarely possible to afford an exact copy of the production server to use for
testing purposes. Carefully researching the software compatibility
issues before installation will normally reveal any potential negative conse-
quences. Before installing or applying any software update, you should do
a full backup.
Before installing any update, first consider the reason for performing
the update. Is the update needed? It is not necessary to install every avail-
able update. If there are no benefits or fixes that are going to provide a
positive impact for your environment, then why take the risk? Refer back
to Chapter 11, “Software Updates,” for more information.
Summary
This chapter explored the development and implementation of a
disaster recovery plan, including defining and creating an effective means of
dealing with both minor and severe system failures.
Development of a disaster recovery plan includes exploring the risks,
understanding the impact, creating strategies, training staff, implementing
the plan, maintaining the plan, and documenting the plan.
When deciding on how to deal with a severe disaster, consideration must
be given to creation of a hot backup site, a cooperative hot backup site, or
a cold backup site. A hot site will be an exact copy of the original site, includ-
ing hardware, software, and operating facility. This is the most expensive
and complex method to set up and maintain but provides the easiest transi-
tion should the need arise to use it. A cooperative site is a hot site created
and shared by several companies with the intent to share the cost. Cold sites
contain equipment for use but often at a more affordable level than the
equipment used at the primary site. Cold sites often require quick setup
and adjustments in order to make them functional.
Replacing failed hardware is the most common service performed during
disaster recovery. In order to accomplish this task successfully, you must
take several steps. First, identify the failure. Modern technology can
assist with this task, providing servers with a means of monitoring
and displaying the status of the hardware. Once the hardware failure is
determined, you must plan for the repair. Planning also includes being
constantly aware of ESD and the dangers it poses to computer components.
Exam Essentials
Know the two broad categories of possible disasters. This includes
natural and non-natural disasters.
Be able to provide examples of both types of disasters. Natural disas-
ters result from naturally occurring and unpreventable events such as
tornado and flood, while non-natural disasters include vandalism, theft,
and user error.
Be able to identify key elements of a disaster recovery plan. Know the
steps involved in creating a disaster plan including exploring the risks,
understanding the impact, creating strategies, training for strategy imple-
mentation, plan maintenance, and documentation.
Know the difference between hot, cooperative, and cold backup sites.
Hot sites are exact copies of the primary site, cooperative sites are hot sites
shared between businesses, and cold sites contain minimal equipment and
will need some configuration before getting up and running.
Know the steps to identify and replace failed hardware. This includes
identifying the failure through visual as well as notification cues, planning
the replacement, locating suitable parts, assessing the complexity of the
replacement, planning the replacement time, and notification.
Key Terms
Before you take the exam, be certain you are familiar with the
following terms:
Review Questions
1. Which of the following is an example of a natural disaster?
A. Electrical fire
B. Theft
C. User error
D. Flood
A. Tornado
B. Flood
C. Electrical fire
D. Lightning strike
B. Non-natural
B. Six
C. Seven
D. Eight
C. Creating strategies
B. In case the person in charge is not available to carry out the plan
11. What format should copies of a disaster recovery plan be kept in?
A. Hard copy
B. Digital copy
C. Backup copy
B. A copy of the primary site that will require setup before use
C. A shared site between several companies
14. Do cold backup sites require any setup or configuration before use?
A. Yes
B. No
C. Depending on whether they are also cooperative backup sites
15. What is the most common form of disaster that a technician will have
to deal with?
A. Software failure
B. User error
C. Hardware failure
D. Natural disasters
A. LED displays
B. E-mail alerts
D. Text alerts
B. Electro-shock discharge
C. Electrostatic discharge
D. Environmental static discharge
A. 30 volts
B. 1,000 volts
C. 50 volts
D. 500 volts
19. What does an ESD wrist strap or mat contain that removes ESD
charges safely?
A. Transistor
B. Diode
C. Capacitor
D. Resistor
20. Computer components that are not within the computer should
be stored in which of the following options?
A. A cardboard box
B. An antistatic bag
C. A bag
D. Any protective container
11. D. Ideally you should keep your disaster recovery plan in as many
different formats as possible. This will ensure that if you need to, you
can resort to several different means of accessing the plan.
12. A. A hot backup site is an exact copy of the primary site, including
equipment, and software. Hot backup sites are the most costly to
create and maintain, but they provide for the smoothest transition.
13. B. In a cooperative backup site one backup facility is shared between
several businesses. Creating an environment that will meet the needs
of all partnered businesses is the major stumbling block.
14. A. Cold backup sites will require setup of equipment as well as
configuration of software and data before they can be used.
15. C. Hardware failures are the most common form of disaster. Servers
operate within a stressful environment, and with time hardware com-
ponents will fail and require replacement.
16. B. Modern servers have the ability to send e-mail alerts to report on
potential problems with the operation of the server. This is a form of
remote notification.
17. C. ESD is the acronym for electrostatic discharge. ESD is a serious
concern when performing a hardware upgrade or replacement. Static
discharge can damage or destroy computer components without the
technician even knowing a discharge has happened.
18. A. It can take as little as 30 volts to seriously damage or destroy a
computer component. At such a small voltage, you would not even be
aware that the damage has occurred.
19. D. An ESD wrist strap and mat contain a resistor that slowly bleeds
any charge away from you and the computer to a ground source.
20. B. Components that are not installed in the computer should be
stored in an antistatic bag to prevent ESD damage.
VI
Troubleshooting Steps
Pinpointing a problem can be very difficult, but following some simple
procedures can make the troubleshooting process much easier. Experienced
administrators won't rely on their own knowledge alone to solve a problem
but will use all the resources available to them to help achieve an accurate
diagnosis. This includes questioning users on the network, interpreting the
information in server logs, and even using their own senses.
Once a problem has been detected, following the steps outlined below can
ensure proper diagnosis and resolution of the problem at hand.
1. Determine the priority of the problem.
Gather Information
There are many ways that a server administrator can acquire information
pertaining to a reported problem; two of the most common and efficient
are to ask questions and to use the information in the error/event log.
Ask Questions
When a problem occurs on a server, usually the first people to notice it or
feel the effects of it are the users on the network. Dealing with users can be
Interpret Logs
Logs are essential in ensuring that your servers run successfully. Logs list
events that have occurred and record activity on your server. Entries in a log
file usually report errors and warnings generated, when they occurred, an
event ID, and sometimes an event description.
Log files can be overwhelming to a server administrator because of their
size and complexity. When troubleshooting, log files are one of the first
places you should look for information on the problem. Some log files will
provide a description of the event while others will only provide you with an
event or error code. Even if you cannot understand the information within
a log, vendor support personnel should be familiar with the error messages
and error codes. Some websites will also allow you to type in the error or
event code or the description and provide you with details on the cause of the
problem and the solution.
Most operating systems, services, and applications will write events to
some sort of log file. Where the logs are located and their format will of
course depend on the operating system. For example, NetWare servers write
The System Log displayed within Microsoft’s Event Viewer gives you the
date and time that the error or warning occurred, the source of the error or
warning (which server component generated the event), an event ID, and a
description of the event.
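As a rough illustration of pulling the same fields (date, time, severity, source, event ID, description) out of a text log, the short script below filters entries by severity. The log format shown is a made-up example, not the actual format of any particular operating system's log:

```python
# Hypothetical log format: "date time level source event_id description"
log_lines = [
    "2002-03-01 08:15:02 ERROR Disk 51 An error was detected on device \\Device\\Harddisk0",
    "2002-03-01 08:15:40 INFO Srv 2013 The disk is at or near capacity",
    "2002-03-01 09:02:11 WARN W32Time 64 Time service lost synchronization",
]

def events(lines, level):
    # Pull out (event_id, source, description) for entries at a given level.
    out = []
    for line in lines:
        date, time, lvl, source, event_id, desc = line.split(None, 5)
        if lvl == level:
            out.append((int(event_id), source, desc))
    return out

for event in events(log_lines, "ERROR"):
    print(event)
```

Even a filter this simple narrows a large log down to the entries worth researching; the event ID and source are exactly what vendor support or an online knowledge base will ask for.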
You will recall from earlier discussions that excessive heat can have a
negative impact on a server, and that servers should be placed in a room
where the temperature can be carefully controlled and monitored. When
troubleshooting any server-related problems, check the temperature of the
server room. There are two issues to be aware of when considering heat.
First, servers and components generate a fair amount of heat. If a fan in a
power supply fails, the box will give off a lot of heat. If a CPU fan fails,
you may notice erratic behavior in your server, such as icons disappearing
from the desktop (this can obviously be more difficult to detect). If the
server itself is overheating internally, it can also cause hardware components
to fail. The second issue involves a fluctuation in room temperature, which
can cause components to physically shift and lose their connection (which
could be the cause of any hardware-related problems you may be having).
I have yet to meet an experienced server administrator who does not consult
some documentation, such as a knowledge base, to solve some server-related
problems.
Search engines
Readme files
Newsgroups
Chat rooms
This is not to say that every time a problem arises you should call for sup-
port. If you can afford the time, first consult the documentation, help files,
and any web resources you can find. If you are unable to locate a solution,
then your next step should be to get help from a more seasoned server admin-
istrator or the manufacturer’s support team.
Once a fix has been found, implemented, and tested, you must document
everything from start to finish. Your documentation should include the
following information:
Description
Cause
Timeframe
Symptoms
Any error messages generated
Solution
Person responsible for troubleshooting
Any configuration changes made as a result of implementing the
solution
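One way to keep those fields consistent from incident to incident is a simple structured record like the sketch below; every field name and sample value here is illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TroubleTicket:
    # One entry in a server maintenance log; field names are illustrative.
    description: str
    cause: str
    timeframe: str
    symptoms: list
    error_messages: list
    solution: str
    technician: str
    config_changes: list = field(default_factory=list)

ticket = TroubleTicket(
    description="Tape drive not detected after reboot",
    cause="Loose SCSI cable on the external chain",
    timeframe="2002-03-01 22:00-23:30",
    symptoms=["Backup job fails to start"],
    error_messages=["SCSI device timeout on ID 4"],
    solution="Reseated the cable and verified termination",
    technician="on-call admin",
)
print(ticket.solution)
```

Whether the record lives in a script, a database, or a paper binder matters less than filling in every field every time; the next technician's troubleshooting starts from this entry.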
Log It!
Documenting may seem like a tedious task but it can make everyone’s job
just a little bit easier. When I look back and compare two technical support
teams that I was a part of, I can really see its importance. One was very
relaxed in making changes to the servers. If problems arose, no one was
notified and whoever was around fixed the problem with no documentation
left behind. This led to chaos, confusion, finger pointing, and what seemed
to me like an increase in server downtime. It was difficult to pinpoint what
configuration changes were made, and there were times when these
changes brought both the server and network down. Being in the dark about
what the technician before you did made it really difficult to troubleshoot.
The other team was far stricter about who did what, how it was done, and
what documentation they kept. Upgrades and troubleshooting were far less
chaotic and more often successful. If any one of the technical staff needed
to know what was done on the server last, finding out was as simple as look-
ing in our server maintenance log. It included a detailed description of the
problem, screen shots of any error messages, the step-by-step solution,
and who performed it. As far as I am concerned, you can never document
too much when it comes to server troubleshooting.
Summary
We began by defining the term troubleshooting. Understanding the
troubleshooting process can make problem detection and resolution much
easier and reduce server downtime.
Troubleshooting a server problem begins with assigning the problem a
priority. Server administrators need to determine which problems need to be
dealt with immediately and which can be put on hold for a short time.
Gathering information on a reported issue entails determining whether
the issue stems from software, hardware, or virus trouble. To get basic
information, ask questions of users on the network. Error/event logs record
precise information about server events, so this is one of the first places to
look for details about a problem.
Using your senses can provide valuable information when troubleshoot-
ing. Assess the server environment for any unusual sounds or smells. A visual
inspection of cables and connectors should also be performed to eliminate
these as the source of the problem.
Once the source has been determined, the person/persons responsible for
troubleshooting in this area should be notified. Large networks will most
likely be administered by experts in different technical areas. The appropriate
individual, the one responsible for providing support in the area of concern,
should be notified.
An abundance of technical resources exists to assist server administrators
in troubleshooting. You should be familiar with some of the basic resources
available and how to access them. Possible resources include telephone sup-
port, readme files, technical support CDs, technical support websites, and
newsgroups. Any one of these or any combination can be used to determine
a solution to the problem.
In some situations it may be necessary to ask for assistance to get the
server functioning as it should. This may include looking to the manufac-
turer’s or vendor’s support personnel for assistance or asking another
administrator on staff.
Finally, the last step in the troubleshooting process is to document. Doc-
umentation should include a description of the problem, any error messages
that were generated, step-by-step instructions for implementing the solution,
names of personnel performing the troubleshooting, and any configuration
changes made as a result of implementing the solution.
Exam Essentials
Know the general troubleshooting procedures. Following a consistent set
of steps when troubleshooting increases the likelihood of correctly
identifying the problem and finding a solution.
Know how to gather information about a reported problem. Problems
can be hardware, software, or virus related. Gathering information allows
you to appropriately diagnose the problem.
Know how to use error/event logs. Event logs, such as the System
Log in Windows Event Viewer, maintain information about events and
actions occurring on a server. They can provide detailed information on
any warnings or alerts caused by malfunctioning hardware or software.
Know how to use your senses. Using your senses of smell, hearing,
and sight when assessing the server environment can help you detect any
physical problems.
Know how to locate resources. Abundant troubleshooting resources
are available to a server administrator.
Know how to document solutions. Include all steps performed and any
problems encountered.
Key Terms
Before you take the exam, be certain you are familiar with the
following terms:
Review Questions
1. When you are troubleshooting a server problem or error, where
should you look to gather information?
A. Network users
B. Technical support CD
C. Error/event logs
D. Online support
E. All of the above
5. What are some common methods or resources you can use to gather
information about a server-related problem?
A. Event logs
B. Maintenance logs
C. Asking questions
B. Talk to the people involved with the server to see if a change has
been made.
C. Reboot the server.
A. Error/event logs
B. Network connections
C. Server I/O
D. System documentation
9. What are some resources that you can use to diagnose and solve
problems?
A. Friends and coworkers in the business.
B. Online knowledge bases.
C. System documentation.
10. Last night you were working on a server, changing out its NIC. Coin-
cidentally, the network team was working on a router at the same
time. When you left, the NIC seemed to be working OK. When you
came into work today, you were bombarded with complaints that no
one could reach the server. Where should you begin checking first?
A. Check with the network team to see what changes were made.
11. You are just starting a new server administration job when a server
fails and the reason it failed isn’t evident through conventional
troubleshooting techniques. Where should you begin checking first?
A. Check previous administrator’s documentation.
12. Your server is running Windows 2000 Advanced Server and has been
freezing lately during certain operations. Where should you go first to
find support on this issue?
B. Newsgroups
C. Server documentation
13. You need to look for a new device driver. Where should you begin
looking?
A. Newsgroups
B. Call support
C. Manufacturer’s website
D. Knowledge base
C. Verify
D. Document
C. Right-clicking My Computer
D. Network Neighborhood
A. Allow you to connect with other users who may have experienced
the same or a similar problem
B. Provide updated device drivers
17. There was a problem with one of your network servers. Over the
weekend another network administrator solved the problem. On
Monday when you arrive at work the server is down. What is your
first step?
A. Call for technical support
D. Question users
C. Resource kits
D. Source
10. A. First check with the network team to see what configuration
changes were made during the evening.
11. A. Answer C is wrong because the question states that conventional
troubleshooting has not yielded any information. Begin by checking to
see if the previous administrator left any documentation on this sys-
tem. Checking the system documentation is a good idea, but probably
not your first choice.
12. A. The first place that you should look for a solution to the problem
is Microsoft’s Knowledge Base, which can be accessed from their sup-
port site or on the Technet CD.
13. C. The best place to look for a new device driver is the manufac-
turer’s website.
14. D. The final step after resolving any server-related problems is to
document the entire process.
15. A. The System Log can be accessed via Start → Settings → Control
Panel → Administrative Tools → Event Viewer → System Log.
16. A. Newsgroups provide a way for a server administrator to connect
with other users who may have experienced the same or a similar
problem. Keep in mind that the information on newsgroups is not
always accurate but it can be used as a starting point.
17. B. The first thing you should do is review the other administrator’s
documentation so you have an idea as to what was done to the server
over the weekend.
18. A, B, C, D. All of the above are available with a yearly subscription
to Microsoft’s TechNet.
19. A, B, D. The System Log within the Event Viewer tells you the date
and time the error occurred, the Error ID, and also the component that
generated the error (source).
20. B. Error/event logs are used to log events and actions that occur on a
server, such as error or warning messages that are generated by server
components.
Common Issues
I once worked for a computer company that said, “We do not have
problems. We have issues and situations.” In fact, if you were caught calling
the customer’s issue (or situation) a problem, you could be seriously repri-
manded. It’s all a matter of semantics really, but there are negative conno-
tations with using the word “problem.” When working on computers, there
are some areas in which problems—er, issues—commonly occur.
Bottlenecks
In the computing world, it seems that bottlenecks are everywhere. A bottle-
neck is defined as a limiting system resource. It’s the component or compo-
nents that slow the machine down. If you have a nice 1.5GHz processor in
your machine, but only have 32 megs of RAM, obviously something is
wrong. More than likely memory will be your bottleneck. Bottlenecks can
occur because of virtually any component in your system. It could be your
motherboard, processor, memory, I/O devices like network cards and
modems, or even network cables.
You will never eliminate all bottlenecks within your computer. I know
that sounds pessimistic, but it’s generally true. If memory is your bottleneck
and you upgrade your RAM, another component will now be the slow one,
like your motherboard. Since you cannot completely get rid of bottlenecks,
the goal is to minimize them to the point where they’re not a nuisance. Ide-
ally, the computer should always be waiting for the human to act next. We
should slow the computer down, not the other way around.
Identifying Bottlenecks
So now that you know what a bottleneck is, it’s time to track them down.
There are many diagnostic utilities on the market that will help you in this
pursuit. But before you go spend money on additional software, see what is
currently at your disposal.
First of all, don’t forget your senses. Does the computer seem slow? If not,
and you are always happy with its performance, don’t bother looking for a
problem. Chances are, if your computer’s performance is always great, the
bottlenecks you do have are not an issue. Go enjoy another cup of coffee. If
the machine does seem slow, then you want to take a look to find out why.
That’s where troubleshooting tools really come in handy. Someone
may tell you that the server seems slow, but how do you know whether
they’re right? Troubleshoot it with monitoring software.
By using your diagnostic software, you can get a snapshot of how the system
is performing at any given moment. Oftentimes, you can look at processor
and memory utilization at a glance.
Acceptable Performance
Resource monitoring tools are great for showing you how your server is per-
forming, at least based on the numbers. System Monitor may tell you that
your processor is working at about 50 percent capacity, there is plentiful
memory, and the hard disks have about 20 percent free space. Are those
numbers good, bad, or do they even matter?
Here are some guidelines as to generally recognized acceptable perfor-
mance numbers:
Processor: Under 80 percent utilization
Memory: Under 75 percent utilization, and no more than 20–30 pages
per second written to virtual memory
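As a rough sketch, these rules of thumb can be encoded in a few lines of Python. The function name, threshold table, and sample numbers below are illustrative inventions, not part of any real monitoring tool:

```python
# Rule-of-thumb limits from the guidelines above. Names are made up
# for this sketch; they are not real System Monitor counter names.
THRESHOLDS = {
    "cpu_percent": 80,       # processor: under 80 percent utilization
    "memory_percent": 75,    # memory: under 75 percent utilization
    "pages_per_second": 30,  # no more than 20-30 pages/sec to virtual memory
}

def find_bottlenecks(sample):
    """Return the metrics in `sample` that exceed their acceptable limits."""
    return [name for name, limit in THRESHOLDS.items()
            if sample.get(name, 0) > limit]

# A snapshot like the one System Monitor might report:
sample = {"cpu_percent": 50, "memory_percent": 91, "pages_per_second": 12}
print(find_bottlenecks(sample))  # -> ['memory_percent']
```

Here the processor and paging numbers are healthy, so memory is flagged as the likely bottleneck to investigate first.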
Fixing Bottlenecks
Identifying the limiting system resource is only half of the battle. Once you
have located the component that is slowing down your system, you need to
do something about it. Many operating systems, such as NetWare 5, are
excellent at self-optimization. However, there is only so much optimizing an
operating system can do with limited physical resources.
Perhaps the most obvious solution is to upgrade or replace the hardware.
If your server’s memory is overtaxed, get more RAM. If the hard disks are
full, get new ones. If the processors are working overtime, get more or faster
processors.
If you can’t upgrade the server’s hardware, consider offloading some
of its services onto another machine. Even though this one server may be
overworked, you might have another server that is relatively idle. If you can
balance the server workload across your entire network, everything will
run much more efficiently.
Unhealthy Results
A worst-case scenario is when you make a configuration change to your
server and something bad happens. Obviously, you need to find a way to get
the server back in working order quickly. How do you proceed?
It all depends on what change you made and what operating system you
have. For failed configuration changes in Windows 2000, you can boot into
Safe Mode. Safe Mode doesn’t load very many drivers and services—only the
keyboard, mouse, and basic video. You can then remove the problem com-
ponent, and reboot the server normally. If you can’t even get to the boot
menu, you will need to boot from a floppy disk.
Hardware Issues
Underneath all of your operating system fluff lies the hardware. Most of the
time, we don’t think much about it. The server runs, and we leave the hard-
ware alone. When you have a hardware failure though, life can become
stressful.
If you have failed hardware, it’s important to isolate the piece of hardware
that failed and replace it as quickly as possible. Sometimes isolating the hard-
ware failure is easy. If there is smoke pouring out of your power supply, you
have a good indication that something is wrong. Other times, it’s not so easy
to figure out what’s wrong. If you can’t access a hard drive, it could be the
drive itself, or the controller that it’s plugged into. Logic can help solve this
too. If other drives that are plugged into the controller are still working,
chances are the controller is fine. If you are ever unsure of what part failed,
try replacing it and see if the replacement works. If not, then you were prob-
ably wrong about what failed. Try replacing something else.
Keeping hot pluggable hardware around is a good idea. If you can keep an
extra hard drive (or two), motherboard, processor, memory, power supply,
and network card around, you can often replace the part quickly. Keeping
extra hardware around can get expensive, however, so it’s not always feasible.
If you can’t do it, just make sure you have quick access to replacement parts.
Some parts are easily replaceable, and others are not. It depends on
the part you are dealing with. Parts that are easily replaced in the field are
referred to as field replaceable units (FRUs). Some examples of FRUs are
hard disks, motherboards, power supplies, network cards, and video cards.
Individual chips on the motherboard (except for the BIOS) are not often
considered to be an FRU.
There is one more important thing to remember about hardware. Not all
of it works with all operating systems. Be sure to check your operating sys-
tem manufacturer’s website to see if the hardware you have (or want to pur-
chase) works with their operating system. Most NOS vendors have a
hardware compatibility list (HCL) that lists hardware that is known to work
with the OS. If your hardware is not on the list, you could be in for difficult
times getting the piece to work. Also check the hardware vendor’s website
for drivers that are specific to your operating system. If they don’t exist,
getting the hardware to work with your operating system does not look
promising.
Troubleshooting Hardware
When we got there, sure enough, the e-mail server was down. So was the
air conditioning. Heat sensors were flashing everywhere. One server rack
was completely powerless, and the other was complaining about the con-
ditions. The young consultant started ripping at cords, trying to get rid of
the defunct UPS. I stood by and watched as an amused observer. He found
more power outlets, and started plugging items (including the e-mail
server) back in.
I looked around at the rack for a few minutes, and politely asked him what
“that little black box that doesn’t have any lights on it” was. It was the CSU/
DSU for the incoming T1 connection, and he had neglected to plug it back
in. As soon as we powered the CSU/DSU up, magically, everything worked
again. We had an external connection.
The moral of the story: Always, always check your hardware connections
first, before you start troubleshooting everything else.
moments. There are times though when a computer gets improperly shut
down, or crashes unexpectedly, and corrupt files appear. No matter how you
acquired the corrupt files, they all need to be fixed.
Applications installed on Windows 2000 Server may be able to automat-
ically repair themselves if they were installed through Add/Remove Pro-
grams in Control Panel. If an application was installed with an .msi file, it
can detect the corrupt file (if it’s directly related to the operation of the pro-
gram), and repair it automatically.
In the case of data files, the only reliable way to retrieve them is to restore
them from a backup. If files are important to you, back them up early and
often. If you are the network administrator, it’s one of your main responsi-
bilities to back up the servers that store user data. Not having an adequate
backup when files become corrupted can be disastrous.
Viruses come in many shapes and sizes. They vary from simple joke
viruses to worms, Trojan horses, and polymorphic stealth viruses. Some just
make your screen look funny. Others corrupt or delete files, while some can
even wipe out your whole machine. On a server, even a “joke” virus is no
laughing matter.
The only surefire way to avoid a virus is to avoid the Internet, and keep
all diskettes away from the machine. Better yet, don’t install any software
at all, and leave the computer powered off. That strategy defeats the purpose
of having a network at all, and makes your server quite useless.
All network servers should be running antivirus software such as Norton
AntiVirus or McAfee VirusScan. Regular virus scans should be performed
on the servers (and all clients if possible), and infected files must be imme-
diately cleaned or destroyed. Most antivirus programs have configurable
options designed to clean or destroy infected files automatically. Also, a
good antivirus program will periodically download new virus updates from
the Internet.
If your server does get infected with a virus, and data is deleted or
destroyed, you need to start from scratch. Completely wipe out all drives on
the server, and restore from tape backup. With any luck, the tape backup
will not be infected. Also, be sure to warn all clients on your network about
the virus. Educate your users about the dangers of viruses, and how to best
avoid them.
Incompatibility Issues
Incompatibility issues with computers can involve both hardware and soft-
ware. The most common hardware incompatibility issue involves RAM.
When you upgrade the RAM in your computer, make sure to get the same
brand that you originally purchased. If possible, get the same exact type as
well, including speed and size. This can sometimes be a difficult task, espe-
cially if you bought the machine already assembled. In that case, contact the
manufacturer of your server and see if you can purchase the memory directly
from them.
Another common hardware incompatibility issue is with the processor.
Some applications and operating systems will not work with certain types of
processors. Be sure to check your NOS documentation before purchasing the
hardware. As an example, Windows NT Server will work with a MIPS pro-
cessor, but Windows 2000 Server will not.
Occasionally you will run across issues with hard drive incompatibility
as well. When mixing brands of hard drives on the same cable, whether the
drives are IDE or SCSI, you can encounter problems. In the case of IDE, try
to reverse the master/slave relationship to resolve the issue. If that does not
work, put the drives on separate controllers if possible. With SCSI, you often
have fewer problems between drive manufacturers. However, if you have a
problem you can try changing the SCSI ID on the drive to fix it.
Software incompatibility can drive a network administrator crazy. It rears
its ugly head when you install one application, and then find out that another
application ceases to function because of it. Fortunately, this does not seem
to be as common a problem as it used to be.
If you do encounter software incompatibility issues, you need to remove
one of the applications. Install the application on another server, and test to
see if everything works okay. If you do not have another server to install the
application on, then see if the vendor of either program has a workaround
for the problem. Give their website a look, or call their technical support for
assistance.
Another software compatibility issue is when the application does not run
on your operating system. This one should be easy to avoid. Just make sure
that the application was designed for use with your operating system before
you purchase it.
Logged events will fall under one of three categories: information, warn-
ing, and error. Informational events do not need to be acted on. They are
there for your general benefit. Warnings have a yellow sign with an excla-
mation point next to them. They indicate potential problems. They are gen-
erated when an error happens, but the error was not critical to the operation
of the server. Errors have a red stop sign next to them, and these are what
you are looking for when troubleshooting problems. Figure 17.5 illustrates
information, warning, and error signs.
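A monitoring script that watches a plain-text export of such a log might tally events by these three severities. This is only a sketch; the sample log lines below are invented for illustration, not real Event Viewer output:

```python
# Sketch: count events by severity, mirroring the three Event Viewer
# categories (information, warning, error). Sample lines are made up.
from collections import Counter

SEVERITIES = ("information", "warning", "error")

def tally(lines):
    """Return a Counter of severity levels found in the log lines."""
    counts = Counter()
    for line in lines:
        level = line.split(",", 1)[0].strip().lower()
        if level in SEVERITIES:
            counts[level] += 1
    return counts

log = [
    "Information, The Event log service was started.",
    "Warning, The browser was unable to retrieve a list of servers.",
    "Error, The DHCP service failed to start.",
    "Error, A device attached to the system is not functioning.",
]
counts = tally(log)
print(counts["error"])  # -> 2: the red-stop-sign entries you chase first
```

Filtering out the informational noise this way lets you focus on the errors, which are what you are looking for when troubleshooting.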
Each log has a specific purpose, and each can help you troubleshoot very
specific server problems:
System Log The System log is enabled by default. It will display events
related to the startup and shutdown of the server, as well as server related
services like DHCP, DNS, WINS, and RRAS.
Application Log The Application log, as its name indicates, logs errors
dealing specifically with applications. Exchange Server, SQL Server, and
other applications generate errors into this log.
Security Log If you have auditing enabled on your server, you will gen-
erate messages in the Security log. Note that the Security log does not use
the information, warning, and error symbols. Rather, it uses two icons: a
padlock for an unsuccessful security attempt (such as a failed logon), and
a key for a successful security attempt (like accessing a file to which one
has permissions).
By double-clicking an event, you can see details about the event, such as
the computer it took place on, the user that was involved, the time it took
place, and an error code. Figure 17.6 shows the details of an event.
Although the bottom pane of the event detail box gives you information
in plain language, what it provides may not be very helpful. You can
search Microsoft’s Knowledge Base site for the Event ID listed to find
more useful information.
If you have configured a domain and Active Directory, you will also have
a Directory Service log and File Replication Service log. Some other services,
like DNS, will install their own log as well.
Windows NT Server also comes with Event Viewer, but it is limited to the
System, Application, and Security log files.
the server has experienced, and the access log tracks all requests made of the
server, and the responses made by the server. Novell allows you to customize
what is included in the access log file. A log analyzer is provided to generate
server statistics.
Using Documentation
It’s sad to say, but most of us are reluctant to crack open a manual in
an attempt to solve a problem. If we do, it’s the last resort. We’ll open the
book only after we’ve asked all of our friends, consulted a psychic, and sent
an instant message to grandma asking for advice. We think that using doc-
umentation is a sign of weakness. It’s really not.
With so much information and so many products out there, it’s impossi-
ble for one person to know everything. Fortunately for us, the people that
produce these products know a great deal about how they work, and are
willing to write it all down on a website or technical manual. You won’t
always find the answer you’re looking for directly out of the manuals. But
even if you don’t, they will lead you in the right direction to solve the prob-
lem, or give you an idea as to what else to try.
The documentation you will use falls under two general categories:
product documentation and previous technical documents about the spe-
cific device.
Product Documentation
Why does your VCR still flash 12:00? More importantly, where is the man-
ual that tells you how to set the time? It may be a natural tendency to discard
manuals when we open new toys (and server hardware qualifies as toys, for
purposes of this discussion). However, it can be much easier to configure the
device and troubleshoot it if you have problems when the manual is handy.
Besides, if you configure it right the first time, it’s less likely that you will run
into problems and need the manual again.
Many companies have a bookshelf or filing cabinet set aside for manuals.
If you don’t have a central repository for technical manuals, it’s about time
to get one. It would be tragic for you to get up the courage to crack a book
open, and then not be able to find the book.
Of course, manufacturer websites are also great sources of information.
Configuration settings with full-color pictures, FAQ lists, troubleshooting
tips, and even discussion forums are all available at your fingertips. Bookmark
your vendor websites in your browser for easy access. If you are under time
pressure to get something fixed, the last thing you want to do is waste
time searching for your vendor’s website.
Here are some good websites to have bookmarked for troubleshooting help:
http://support.microsoft.com/
http://www.microsoft.com/technet/
http://support.novell.com/
https://www.redhat.com/apps/support/
http://www.sun.com/service/support/
http://www-1.ibm.com/support/search/
The majority of the time, you will find an answer on one of these help
sites. Product upgrades, patches, and hot fixes are also at these locations on
the vendor websites.
When using a log book, it’s important to write down good technical informa-
tion. However, never write down passwords!
Even if you have the best documentation in the world, there will be
situations where you cannot seem to find an answer to the problem. In situ-
ations like these, call someone for help. In the cases where you are in over
your head, making guesses to fix the server could only compound the prob-
lem. If the server is a production server, you could cost the company more
money by trying to guess and fix the situation than you would by calling
someone and asking for assistance. Put your ego aside and make the call. It’s
better to be safe than sorry when dealing with an unknown situation.
Remote Troubleshooting
You’re not always going to be able to get to the machine that is not
functioning properly. This usually means you will have to talk someone
through a resolution over the phone. Most of the time, the person you are
helping will not have nearly the computer experience that you have. Patience
is a key. Other times, you will be able to remotely control the machine, and
fix it that way. This section looks at remote troubleshooting of servers, and
also looks at how to troubleshoot a wake-on-LAN.
Of course, all of the products mentioned assume that your remote machine is
running. But what if it’s not?
If you cannot get the remote machine to work, you will have to trouble-
shoot over the phone. Ideally, you will have a technician at the site who is
computer savvy. More often than not, you are working with someone who
has little to no computer experience. They are frustrated because the machine
does not work, and the frustration is compounded because they don’t feel
that they should be the one working on the problem. When troubleshooting
over the phone, remember to keep your patience, and take it slow. Explain
what you are trying to do, and allow that person to see that you need and
value their help. Between the two of you, hopefully the problem can be fixed.
Wake-on-LAN
Wake-on-LAN (WOL) is a wonderful technology that allows an adminis-
trator to boot a machine at a remote location. After the machine is working,
the administrator can perform maintenance tasks, such as backups and virus
scans, during off hours. This decreases interruptions faced by users during
the day. The WOL technology was originally developed by AMD, and
termed “magic packet.” The network adapter maintains a very low power
state even when the computer is powered off. The NIC then looks for special
packets on the network indicating it should wake up the machine.
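The magic packet format is simple enough to sketch in Python: six 0xFF bytes followed by the target adapter’s MAC address repeated 16 times, sent as a UDP broadcast (port 9 is customary). The MAC address in the usage line is a placeholder for illustration:

```python
# Sketch: build and broadcast an AMD-style magic packet: six 0xFF bytes
# followed by the target NIC's MAC address repeated 16 times.
import socket

def build_magic_packet(mac: str) -> bytes:
    """Return the 102-byte magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), ("255.255.255.255", port))

# send_wol("00:0c:29:4f:8e:35")  # placeholder MAC; the sleeping NIC
#                                # recognizes this pattern and powers on
```

The dozing network adapter watches for its own MAC address repeated in this pattern, and when it sees it, signals the motherboard to power the machine on.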
Windows products, NetWare, Unix, and Linux all support wake-on-LAN
technology. Your motherboard must also support it, and have a cable run-
ning from the NIC to the motherboard’s WOL connector. The motherboard
BIOS must also support WOL. However, if your motherboard has a WOL
connector, it’s likely that the BIOS supports it as well. Figure 17.7 shows
what a typical wake-on-LAN configuration might look like.
Power to the network adapter comes from one of two sources, depending
on how your machine is configured. Power can come from the PCI bus where
the network card is plugged in, or from an auxiliary power cable coming
from the power supply. In
either case, the network card has power even when the rest of the machine
is off. The only way to turn the network card off is to unplug the power cable
from the back of the machine.
To make wake-on-LAN technology work, you will need management
software as well. Microsoft’s Systems Management Server (SMS) 2, Novell’s
Z.E.N.works, and IBM’s NetFinity are all examples of WOL management
software.
Diagnostic Tools
Each operating system comes with its own set of diagnostic tools
for your use. Although many of these tools have been covered already in this
chapter, this section will summarize some of the more important ones. Also,
the troubleshooting process frequently involves rebooting the server. This
section will also cover the steps required to reboot your machine if you
need to.
Windows NT/2000
Although Windows NT and Windows 2000 are very similar in structure,
they do have different names for utilities that perform the same tasks.
One of the most important tools for troubleshooting in Windows NT is
Event Viewer (see the “Event Viewer” section above). It contains three logs:
System, Application, and Security. Errors reported by the system will appear
in the System log. To troubleshoot users, use User Manager for Domains.
Server problems can be diagnosed through the Server Manager utility. Use
Windows Explorer to deal with security issues.
Windows 2000 also contains an Event Viewer. It has the same function-
ality as its Windows NT counterpart, except that it has additional logs avail-
able. In a domain, you have Active Directory Users and Computers, Active
Directory Sites and Services, and Active Directory Domains and Trusts avail-
able for a variety of management responsibilities. Also, you can right-click
My Computer and choose Manage to reach many of these tools in one console.
NetWare
NetWare provides built-in tools for administration and troubleshooting
as well. In NetWare 4, NWAdmin is the ultimate management and trouble-
shooting tool. It has an easy to use interface, and allows you to control
virtually every aspect of the server. NetWare 5 replaced NWAdmin with
ConsoleOne. The Monitor utility on NetWare servers is also good for look-
ing at configurations to see if there is a problem. NetWare also uses extensive
logging, and the logs can be viewed with any text editor. For repairing the
NDS tree, there is dsrepair, dstrace, and dsdiag.
To reboot your NetWare server, type down from the server prompt. When
returned to a command prompt, type restart server, and the server will
reboot.
Switch    Function
-k        Displays the warning, but does not shut down the server
The linuxconf and coastool utilities also have their own shutdown panels.
If you cannot find a troubleshooting tool that does what you need it to,
remember that there are a large number of third-party vendors that produce
troubleshooting tools for specific operating systems. Go to your favorite
search engine and use your operating system and the word troubleshooting
as keywords. You are guaranteed to find some tools.
Troubleshooting Checklist
When troubleshooting computers, you want to make sure your trou-
bleshooting path takes a logical flow. After all, computers are logical beasts.
Here are some steps to follow when troubleshooting a problem:
1. Determine the problem’s priority.
The first thing to do is determine how important the problem is. Most
companies have a grading scale of the problem’s importance. The higher it
is on the priority scale, the quicker you need to fix it. If it’s a choice between
fixing the corporate e-mail server, or fixing a user’s soundcard (after all, they
can’t listen to MP3s now), I would go with the e-mail server.
2. Gather information.
Find documentation, and gather everything you can about the problem at
hand. Sometimes this may be just you and a screwdriver. Other times you
will want to bring along technical documentation. Find out what you need,
and have it with you.
3. Isolate the problem.
It’s quite difficult to solve a problem if you’re not sure exactly what’s
wrong. In the case of failed hardware, isolate the problem to the specific
component. Try another component if you can, to see if it works. Once you
have located the exact part that’s broken, it’s much easier to fix. If you are
dealing with a software error, see if shutting down the application and
reopening it helps. If not, you may want to reboot the machine in question.
Rebooting a machine fixes many software problems. I always tell my clients,
“Reboot. And if it goes away, it’s not a problem.” Of course, if the problem
persists, you may need to reinstall the software.
4. Fix the problem.
Once you have figured out exactly what the problem is, fix it. I know this
sounds elementary, but some people forget this step.
5. Document your work.
And last but certainly not least, document what you did. Make sure your
documentation finds its way into the server log books. That way, the next
technician that works on the machine knows what you did. Good
documentation serves a few functions. One, it saves everyone time in the
long run. Two, if there are more problems later, you have covered your bases.
The checklist above is just an example of what you can do when working
on machines. You may modify it slightly to suit your own style. However, I
do not recommend skipping any of the listed steps.
game. Look through the answers, and eliminate ones you know are wrong.
If you only have one left, then that must be the right answer.
Knowing what menus, tools, and utilities you have available helps. If you
are presented with an answer that uses a tool that you have never heard of,
chances are it doesn’t exist. It may sound like the perfect tool for the job. It
may also be a really good wrong answer, because it’s not a real tool. Know
your product, and know what you can use.
Most of all, relax. This is only a test that measures how much you know
about various server products. It’s not a measure of your self worth. If you
are too tense, you can vapor lock and your brain can freeze up. It’s hard to
answer questions when you can’t think because you are too nervous. Relax,
think of a happy place, and tackle the question in front of you.
Summary
In this chapter, we looked at some troubleshooting fundamentals for
network servers.
First, we looked at some common problems, or issues, that servers may
have. Bottlenecks are very common. So common, in fact, that you will prob-
ably never get rid of them entirely. Your goal is to minimize them as much as
you can. Failed configuration changes can be problematic. They need to be
reversed quickly. Having the wrong hardware can keep your server from
running, and it’s important to know how to replace failed hardware with the
correct part.
Viruses and file corruption can hamper productivity. Make sure to back
up files regularly, so if files are corrupted they can be retrieved. Run virus-
scanning programs on all of your servers, and all of your clients as well if you
can. Also, hardware and software components can sometimes cause conflicts
with each other, or not work at all with another component.
Next, we looked at log files. Almost every operating system generates log
files of some sort. They provide good information to help you in trouble-
shooting server problems. Log files can also warn you of potential problems
so you can address them before they become serious.
Documentation can save you a lot of headaches and time when fixing
servers. Have a central location where all relevant server documentation is
stored. This includes a log of all maintenance and configuration changes
performed on the server. Not keeping track of this sort of information can
cause ugly problems.
Next, we explored remote troubleshooting. Some operating systems have
tools built in for this purpose. Not all remote troubleshooting tools are
created equal. If possible, it’s nice to be able to take remote control of the
machine you are troubleshooting. Wake-on-LAN network cards are also
handy for remote troubleshooting, and remote maintenance and administra-
tion of client machines.
There are quite a few common diagnostic tools that you will use. The
tools vary depending on the operating system. Generally, the ones that come
with Windows-based servers are easy to use with menu-driven options. In
the Unix and Linux worlds, you will often face command-line tools, and you
will need to know the proper syntax to make them work. Check your doc-
umentation for details on their usage.
Last, we looked at a troubleshooting checklist, and discussed real-life
troubleshooting versus test troubleshooting. Real-world experience is gener-
ally nothing but a positive thing when testing. Just remember that your job
on the test is to pick the best answer out of the choices you are given.
Exam Essentials
Understand what a bottleneck is. Bottlenecks are limiting system
resources. Because of a bottleneck, the system runs slower than it proba-
bly should.
Know how to eliminate bottlenecks. You will probably never eliminate
all bottlenecks in your server. But if you can eliminate as many of them as
possible, your server will run well and clients will be happy. Sometimes
tweaking a server configuration can help. However, the best way to reduce
bottlenecks is by adding faster (or bigger) hardware to the server.
Know how to undo a failed configuration change. This depends on
your operating system. As an example, in Windows 2000 you can boot
into Safe Mode, and undo the change you made. Or if you installed an
application that is causing problems, remove it with Add/Remove Pro-
grams in Control Panel. To remove an application on a Linux server, use
the rpm utility.
Know how to identify and replace failed hardware. The failed piece of
hardware is the one that’s not working. All joking aside, isolate the prob-
lem by trying different hardware and seeing if it works. Alternately, you
can try the suspect piece of hardware in another functional machine.
Actual step-by-step details on replacing hardware vary per device. How-
ever, reading the hardware manual or looking at the vendor’s website can
give you good instructions.
Understand how to protect against viruses. A good antivirus program
is your best protection. Know that if you are on the Internet though, you
can never be 100 percent certain that you will not contract a virus.
Know how to deal with corrupt files. Corrupt files need to be restored
from the most recent valid tape backup.
Know what log files are good for. Log files won’t magically fix any-
thing by themselves, but they will give you clues as to what the problem
is. If you receive information in a log file that does not make a lot of sense
(like the Event IDs in Microsoft’s Event Viewer), check the vendor’s
online documentation for more details on how to proceed.
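On a Unix-style server, the same clue hunting is typically done by searching the system logs with grep. The log lines below are fabricated for the example:

```shell
# Sketch: pull error and failure lines out of a system log with grep.
cat > /tmp/messages.sample <<'EOF'
Jan 10 03:12:44 srv1 kernel: sda: I/O error, sector 123456
Jan 10 03:12:45 srv1 dhcpd: DHCP service failed to start
Jan 10 03:13:02 srv1 sshd: session opened for user admin
EOF
# prints the I/O error and the DHCP failure lines, skipping normal activity
grep -iE "error|fail" /tmp/messages.sample
```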
Understand the importance of documentation. Vendors don't include
manuals with their products just to make the package heavier and drive up
shipping charges. Manuals are there to assist you
in the installation, configuration, and troubleshooting of their product. If
the manual does not have the information you are looking for, check the
vendor’s website for the latest news.
Know what wake-on-LAN is. Wake-on-LAN is a technology that
allows administrators to boot remote computers from their network
cards. It’s useful for remote troubleshooting and administration.
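The mechanism is simple enough to sketch: the network card listens for a "magic packet" made of six 0xFF bytes followed by the target's MAC address repeated 16 times. Actually broadcasting the packet takes a tool such as ether-wake or wakeonlan; the MAC address below is made up, and this only builds the hex layout:

```shell
# Sketch: the hex layout of a wake-on-LAN magic packet.
mac='001122334455'            # hypothetical target MAC, colons removed
packet='ffffffffffff'         # header: 6 bytes of 0xFF
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do
  packet="$packet$mac"        # the MAC address, repeated 16 times
done
echo "${#packet}"             # 204 hex digits, i.e. a 102-byte payload
```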
Know how to troubleshoot for the exam. If you know what tools you
have available, and what each of the tools does, you will be in good shape.
Remember, your job on the test is to pick the best answer available.
Key Terms
Before you take the exam, be certain you are familiar with the follow-
ing terms:
baseline
bottleneck
field replaceable units (FRUs)
hardware compatibility list (HCL)
NetFinity
Systems Management Server (SMS)
wake-on-LAN (WOL)
Z.E.N.works
Review Questions
1. You are the server administrator for your company. The company has
three servers, one of which holds the users’ home directories. The net-
work is connected to the Internet through a Cisco router. Recently,
files have started to disappear off of the servers. It appears to be ran-
dom and intermittent. You perform a security audit, and there is not
one user account responsible for the deletions. What is the most likely
cause of the problem?
A. The server hard drives are getting old and need replacement.
2. You are the network administrator for an insurance firm that has five
offices throughout the state. Users complain that when you update
their machines during the day, it slows their productivity greatly. They
cannot service their customers as well, don’t make as many sales, and
their supervisor is not pleased. What technology could you implement
to reduce the interruptions associated with updating the user worksta-
tions throughout the company?
A. PC Anywhere
B. Wake-on-LAN
C. Software Automater
D. Systems Management Automater
3. You are the NetWare administrator for your network. You have
four NetWare servers, all in the same NDS tree. The servers are run-
ning NetWare 5.1. Recently, one of the servers performed an abend,
and had to be brought back up manually. You want to know what
caused the problem. Where should you look?
4. You are running four Windows 2000 Server computers on your net-
work. You have configured a domain called mycompany.local. Two
of your servers are domain controllers, and the other two are member
servers in the domain. Your domain controllers also provide DNS ser-
vices for your network. You believe that one of your processors in one
of your domain controllers is being overworked. What utility should
you use to verify this?
A. Network Monitor
B. System Monitor
C. Monitor
5. Your network has two Linux servers. You want to reboot one of the
servers. You want to warn the users, have the server wait ten minutes,
and then reboot automatically. What command should you execute?
A. shutdown -r +10 "Save your work now, the server is rebooting!"
6. You are the NetWare administrator for your network. You have
four NetWare servers, all in the same NDS tree. The servers are run-
ning NetWare 5.1. You suspect that someone is trying to hack into
one of your servers by using brute force. Where should you check to
see if this is the case?
7. You are the hardware administrator for your company’s seven servers.
Recently, you were instructed to purchase new memory for one of the
machines. You purchased the required memory, and have installed it
in the server. However, upon rebooting the server, the new memory is
not recognized. You take it back to the store, where they test it and
determine the memory to be functional. Their store policy is to not
accept refunds or exchanges for open memory. What is the most likely
cause of why the RAM is not recognized?
A. The RAM is nonfunctional.
C. The new RAM chip has more memory than your current RAM
chips, and you cannot mix different sizes in one machine.
D. The RAM you purchased is incompatible with the current RAM
installed in the server.
10. You are the network administrator for your company. You have three
NetWare 5.1 servers, and fifty client machines. You know that Net-
Ware is self-tuning when it comes to performance, but you want to
know how busy your processors are in the server. What utility will
allow you to check this?
A. Performance Monitor
B. System Monitor
C. Monitor
11. You are the hardware administrator for your company. One of your
servers has just had a problem, and you go to investigate. One of the
other administrators says that a hard drive crashed, and needs to be
replaced immediately. You notice that the case has not been removed
from the server. How should you proceed?
A. Get a new hard drive and replace the failed drive immediately.
B. Test the machine to make sure it’s the hard drive that failed.
C. Replace the defective hard drive immediately and document your
solution in the server log book.
D. Go back to getting the soundcard to work on your personal work-
station.
12. You are the NetWare administrator for your network. You have
four NetWare servers, all in the same NDS tree. The servers are run-
ning NetWare 5.1. You suspect that one of your hard drives may be
experiencing errors. Where should you check to see if this is the case?
A. The abend.log file in sys:system
13. You are the hardware administrator for your company’s five servers.
Recently, you were instructed to purchase new memory for one of the
machines. You purchased the required memory, and have installed it
in the server. However, upon rebooting the server, the new memory is
not recognized. You place the RAM into another server to test it, and
it works properly. What is the most likely cause of why the RAM is
not recognized in the intended server?
A. The server’s motherboard already has the maximum amount of
RAM that it will support installed.
B. You must install new RAM chips in pairs, not individually.
D. The system BIOS does not recognize the new memory chip.
14. You are the administrator of a Windows 2000 Server. You recently
installed a new third-party name-resolution service on your server.
Now, when you reboot the server, you receive many error messages,
and then the server hangs. You need to make the server operational as
quickly as possible. How should you proceed?
A. Reinstall the server from the Windows 2000 Server CD.
C. Boot the server into the Recovery Console, and use ntdsutil to
restore the server to its previous state.
D. Boot the server into Safe Mode, and remove the new service.
Reboot the server.
15. You are the network administrator for your company. While running
Performance Monitor on one of your Windows NT servers, you notice
that your processor time is constantly between 90 and 100 percent.
What should you do, if anything, to rectify the situation? (Choose all
that apply.)
A. Install a second processor in the server.
16. You are the NetWare administrator for your network. You have
four NetWare servers, all in the same NDS tree. The servers are run-
ning NetWare 5.1. One of the servers was powered off last night, and
you are not sure why. When you attempt to boot the server now, it
seems to proceed normally, and then hangs up. You attempt to reboot
again, but it appears to have the same problem. What file do you most
want to look at to see what the problem is?
A. The abend.log file in sys:system
17. One of your network servers has recently experienced corrupted files.
Three users insist that they lost important project files that were stored
on the server. How should you proceed in an attempt to recover the
missing files?
A. Have each user use the rollback command from the command
prompt to restore the files.
B. Have each user open the application that the files used. The appli-
cation will have a cached copy of the missing files. Save those files,
and have the users use them.
C. Restore the files from the most recent tape backup of the server.
D. Scold the users for not having a local copy of these important files.
Remind them about the importance of backing up critical files.
18. You are the network administrator for your company’s Microsoft
Windows NT network. You have four servers, and have a domain.
After performing an upgrade on the server that functions as your
DHCP server, you reboot the machine. You receive an error message
saying that the DHCP Service has failed to start. Where should you
check to get more information on this error?
A. Event Viewer, System log
E. DHCP Manager
19. You are a network administrator for your company. You have twelve
servers, including one Microsoft Windows 2000 Server functioning as
your dial-in server. Periodically, users complain that they are discon-
nected from the RRAS server without any warning. What source
should you check to see if you can find out what the problem is?
A. The ras.log file
B. The ppp.log file
20. You are the server administrator for your company. One of your users
has noticed that every time they perform a specific operation within
one of your mission-critical applications, it crashes unexpectedly. You
verify that this happens. You delete and reinstall the application, but the
crash persists. What should you do to rectify the situation?
A. Check with your hardware vendor’s website to see if there are any
known fixes or patches.
B. Check with your software vendor’s website to see if there are any
known fixes or patches.
C. Delete and reinstall the application again, and see if the crash
goes away.
D. Uninstall the application.
16. B. The boot$log.err file records all events during the boot process.
Granted, in this case, if you can’t boot the server up, it may be hard to
get to that file. However, it’s the one you would want to look at if you
could get to it.
17. C. It’s the administrator’s responsibility to back up file servers. If the
users are missing files, then the administrator can restore them from a
recent tape backup. Generally, you should not leave it to the users to
back up their own files. The rollback command, in relation to restor-
ing files, does not exist.
18. A. Error messages that pop up when you boot will log more informa-
tion in the System log of Event Viewer. Go there first to find out more
information. If you are still stumped as to how to fix the problem, you
can use the provided Event ID and search Microsoft’s support site for
more information.
19. C. The device.log file will log all activity that goes through the
serial ports of your RRAS server. If users are being intermittently dis-
connected, this file may be able to give you an indication as to why.
20. B. This is a good time to check with the software vendor and see if
they know about the problem that you’re having. Their website may
have a fix or patch that alleviates the problem. If not, you will cer-
tainly want to call their tech support and see if they can fix it. If they
are unwilling or unable to, search for an alternate product.
10Base2 An Ethernet standard which uses 802.11 The family of 802.11 IEEE standards
thinnet coaxial cable baseband communication includes several varations on high-speed wire-
at 10 megabits per second (Mbps) over a max- less networking, including 802.11a and
imum distance of 200 meters. 802.11b.
10Base5 This Ethernet standard uses 80286 Also called the 286. A 16-bit micro-
thicknet coaxial wire with 10 megabits per processor from Intel, first released in February
second (Mbps) transfer speed at a maximum 1982 and used by IBM in the IBM PC/AT com-
distance of 500 meters. puter. Since then it has been used in many other
IBM-compatible computers. The 80286 uses a
10BaseT An Ethernet standard that uses 16-bit data word and a 16-bit data bus, and it
twisted-pair cabling at a transmission speed uses 24 bits to address memory.
of 10 megabits per second (Mbps) and a max-
imum distance of 100 meters. 80287 Also called the 287. A floating-point
processor from Intel, designed for use with the
386 enhanced mode In Microsoft Win- 80286 CPU chip. When supported by applica-
dows, the most advanced and complex of tion programs, a floating-point processor can
the different operating modes, 386 enhanced speed up floating-point and transcendental
mode lets Windows access the protected mode math operations by 10 to 50 times. The 80287
of the 80386 (or higher) processor for extended conforms to the IEEE 754-1985 standard for
memory management and multitasking for binary floating-point operations, and it is avail-
both Windows and non-Windows application able in clock speeds of 6, 8, 10, and 12MHz.
programs.
80386DX Also called the 80386, the 386DX,
802.3 An IEEE standard that defines a bus and the 386. A full 32-bit microprocessor intro-
topology network that uses a 50-ohm coaxial duced by Intel in October 1985 and used in
baseband cable and carries transmissions at many IBM and IBM-compatible computers.
10Mbps. This standard groups data bits into Available in 16-, 20-, 25-, and 33MHz ver-
frames and uses the Carrier Sense Multiple sions, the 80386 has a 32-bit data word, can
Access with Collision Detection (CSMA/CD) transfer information 32 bits at a time over the
cable access method to put data on the data bus, and can use 32 bits in addressing
cable. memory. The 80386 is equivalent to about
802.5 The IEEE 802.5 standard specifies a 275,000 transistors, and can perform 6 million
physical star, logical ring topology that uses a instructions per second. The floating-point pro-
token-passing technology to put the data on the cessor for the 80386DX is the 80387.
cable. IBM developed this technology for their 80386SX Also called the 386SX. A lower-
mainframe and minicomputer networks. IBM’s cost alternative to the 80386DX micropro-
name for it was Token Ring. The name stuck, cessor, 80386SX was introduced by Intel in
and any network using this type of technology 1988. Available in 16-, 20-, 25-, and 33MHz
is called a Token Ring network.
versions, the 80386SX is an 80386DX with internally but at 25MHz while communicating
a 16-bit data bus. This design allows systems with other system components, including
to be configured using cheaper 16-bit compo- memory and the other chips on the mother-
nents, leading to a lower overall cost. The board, thus maintaining its overall system com-
floating-point processor for the 80386SX is patibility. 50- and 66MHz versions of the DX2
the 80387SX. are available. The 486DX2 contains 1.2 million
transistors and is capable of 40 million instruc-
80387 Also called the 387. A floating-point tions per second.
processor from Intel, 80387 was designed for
use with the 80386 CPU chip. When supported 80486SX Also called the 486SX. A 32-bit
by application programs, a floating-point pro- microprocessor introduced by Intel in April
cessor can speed up floating-point and tran- 1991. The 80486SX can be described as an
scendental math operations by 10 to 50 times. 80486DX with the floating-point processor
The 80387 conforms to the IEEE 754-1985 circuitry disabled. Available in 16-, 20-, and
standard for binary floating-point operations 25MHz versions, the 80486SX contains the
and is available in speeds of 16, 20, 25, and equivalent of 1.185 million transistors and can
33MHz. execute 16.5 million instructions per second.
80486DX Also called the 486 or i486. 80487 Also called the 487. A floating-point
80486DX is a 32-bit microprocessor intro- processor from Intel, designed for use with the
duced by Intel in April 1989. The 80486 repre- 80486SX CPU chip. When supported by appli-
sents the continuing evolution of the 80386 cation programs, a floating-point processor can
family of microprocessors and adds several speed up floating-point and transcendental
notable features, including on-board cache, math operations by 10 to 50 times. The 80487
built-in floating-point processor and memory is essentially a 20MHz 80486 with the floating-
management unit, as well as certain advanced point circuitry still enabled. When an 80487 is
provisions for multiprocessing. Available in added into the coprocessor socket of a mother-
25-, 33-, and 50MHz versions, the 80486 is board running the 80486SX, it effectively
equivalent to 1.25 million transistors and can becomes the main processor, shutting down
perform 20 million instructions per second. the 80486SX and taking over all operations.
The 80487 conforms to the IEEE 754-1985
80486DX2 Also known as the 486DX2. A standard for binary floating-point operations.
32-bit microprocessor introduced by Intel in
1992. It is functionally identical to and 100 8086 This 16-bit microprocessor from
percent compatible with the 80486DX, but it Intel was first released in June 1978, and it
has one major difference: the DX2 chip adds is available in speeds of 4.77MHz, 8MHz,
what Intel calls speed-doubling technology— and 10MHz. The 8086 was used in a variety of
meaning that it runs twice as fast internally as early IBM-compatible computers as well as the
it does with components external to the chip. IBM PS/2 Model 25 and Model 30. The 8086
For example, the DX2-50 operates at 50MHz uses a 16-bit data word and a 16-bit data bus.
The 8086 contains the equivalent of 29,000 Active Directory The Active Directory, a
transistors and can execute 0.33 million feature of Windows 2000, stores information
instructions per second. about users, computers, and network resources.
The Active Directory is stored in databases on
8088 This 16-bit microprocessor from Intel special Windows 2000 Server computers called
was released in June 1978, and it was used in Domain Controllers.
the first IBM PC, as well as the IBM PC/XT,
Portable PC, PCjr, and a large number of IBM- Active hubs A type of hub that uses elec-
compatible computers. The 8088 uses a 16-bit tronics to amplify and clean up the signal
data word, but transfers information along before it is broadcast to the other ports.
an 8-bit data bus. Available in speeds of
4.77MHz and 8MHz, the 8088 is approxi- active matrix A type of liquid crystal display
mately equivalent to 29,000 transistors and that has a transistor for each pixel in the screen.
can execute 0.33 million instructions per active-matrix screen An LCD display mech-
second. anism that uses an individual transistor to con-
8-bit bus The type of expansion bus that was trol every pixel on the screen. Active-matrix
used with the original IBM PC. The bus can screens are characterized by high contrast,
transmit 8 bits at a time. a wide viewing angle, vivid colors, and fast
screen refresh rates, and they do not show the
Accelerated Graphics Port (AGP) bus A streaking or shadowing that is common with
type of 32-bit expansion bus that runs at cheaper LCD technology.
66MHz. It is a very high-speed bus that is
used primarily for video expansion cards actuator arm The device inside a hard disk
and can transfer data at a maximum drive that moves the read/write heads as a
throughput 508.6MBps. group in the fixed disk.
access control list A method of controlling adapter fault tolerance This process
network resources by allowing or denying involves installing and configuring more than
access to users. This list of rules is created and one network card (adapter) in a computer to
maintained on the server. provide continuous operation should a fault
arise in a network card.
access time The period of time that elapses
between a request for information from disk address bus The internal processor bus used
or memory and the information arriving at for accessing memory. The width of this bus
the requesting device. Memory access time determines how much physical memory a pro-
refers to the time it takes to transfer a char- cessor can access.
acter from memory to or from the processor, address The precise location in memory or
while disk access time refers to the time it on disk where a piece of information is stored.
takes to place the read/write heads over the Every byte in memory and every sector on a
requested data. disk have their own unique addresses.
analog Describes any device that represents application servers Also called appservers,
changing values by a continuously variable these servers share out applications (software
physical property such as voltage in a circuit, programs) to users over a network.
fluid pressure, liquid level, and so on. An archive bit A file attribute that is used to
analog device can handle an infinite number of determine whether a file has been updated since
values within its range. the last backup. A bit is set in the file directory
antistatic bag A bag designed to keep static to indicate the archive status.
charges from building up on the outside of a archiving The process of compressing
computer component during shipping. The bag numerous files into one compact file.
will collect some of the charges, but does not
drain them away as ESD mats do. ARP ARP stands for address resolution pro-
tocol. When a host on an IP based network
antistatic wrist strap (ESD strap) A spe- wants to send data to another host, the host
cially constructed strap worn to guard against name must be mapped to an IP address and the
the damages of ESD. One end of the strap is IP address mapped to a MAC address.
attached to an earth ground and the other is
wrapped around the technician’s wrist. array A grouping of hard disks that are the
same size, speed, and type. Often used within a
antivirus program An application program RAID conifguration.
you run to detect or eliminate a computer virus
ASCII Acronym for American Standard angles to the expansion cards, thus allowing the
Code for Information Interchange. A standard fan from the power supply to assist in cooling
coding scheme that assigns numeric values to the CPU.
letters, numbers, punctuation marks, and con-
trol characters, to achieve compatibility among authentication The process of determining
different computers and peripherals. the identity and legitimacy of a user, node,
or process. Username and password are com-
assessment An evaluation performed to monly used to provide authentication.
measure the current status of either software
or hardware operations. Assessments are nor- AUTOEXEC.BAT A contraction of AUTO-
mally performed objectively with a set standard matically EXECuted BATch. AUTOEXEC.BAT
to measure performance. is a special DOS batch file, located in the root
directory of a startup disk, and it runs automat-
asynchronous Describes a type of communi- ically every time the computer is started or
cation that adds special signaling bits to each restarted.
end of the data. The bit at the beginning of the
information signals the start of the data and is auto-ranging multimeters A multimeter
known as the start bit. The next few bits are the that automatically sets its upper and lower
actual data that needs to be sent. Those bits are ranges depending on the input signal. These
known as the data bits. Stop bits indicate that multimeters are more difficult to damage by
the data is finished. Asynchronous communica- choosing the wrong range setting. See also
tions have no timing signal. multimeter.
AT bus Another name for the ISA bus. See Autorun On a CD-ROM, the Autorun
also ISA. option allows the CD to automatically start an
installation program or a menu screen when it
ATA version 2 (ATA-2) The second version is inserted into the CD-ROM drive.
of the original IDE (ATA) specification that
allowed drive sizes of several gigabtyes and availability Availability relates to a resource,
overcame the limitation of 528MB. It is also such as a server, which is currently accessible.
sometimes generically known as Enhanced IDE By including redundancy within your server
(EIDE). configuration, you are increasing the avail-
ability of the server.
Attached Resource Computer Network
(ARCNet) A network technology that uses a baby AT A type of motherboard form factor
physical star, logical ring, and token passing where the motherboard is smaller than the orig-
access method. It is typically wired with coaxial inal AT form factor.
cable. backup A duplicate copy made to be able to
ATX A motherboard form factor in which recover from an accidental loss of data.
the processor and memory slots are at right
Backup The Backup utility for Windows has batch file File with a .bat extension that
evolved from a command-line program to a contains other DOS commands. By typing the
GUI program. name of the batch file and pressing Enter, DOS
will process all of the batch file commands, one
backup drive A device used to create a at a time, without need for any additional user
backup of computer data for safe storage. input.
Backup devices normally use removable
media that hold the backed-up data. baud rate In communications equipment, a
measurement of the number of state changes
backup set A related collection of backup (from 0 to 1 or vice versa) per second on an
media. asynchronous communications channel.
backup software Programs and utilities Berg connector A type of connector most
used to back up data to a tape drive or other commonly used in PC floppy drive power
device. Third-party backup programs offer cables; it has four conductors arranged in
more features than the standard utilities a row.
included with operating systems.
beta Beta code is software that has reached
backup source The device or data being the stage where is usable and generally stable,
backed up. but it is not completely finished. Beta code is
bandwidth In communications, the differ- often released to the public for testing on an “as
ence between the highest and the lowest fre- is” basis, and user comments are then used to
quencies available for transmission in any given finish the release version of the product.
range. In networking, the transmission capacity bias voltage The high-voltage charge
of a computer or a communications channel applied to the developing roller inside an EP
stated in megabits or megabytes per second; the cartridge.
higher the number, the faster the data transmis-
sion takes place. Bindery Novell’s database containing all the
information about users, workstations, servers,
baseline A measure of performance during and other objects recognized by the server. The
what is considered a normal workload. Bindery was replaced in version 4 by NetWare
BASH Bourne Again SHell is GNU’s com- Directory Services (NDS).
mand interpreter for Unix. See Bourne also. binary Any scheme that uses two different
basis weight A measurement of the “heavi- states, components, conditions, or conclusions.
ness” of paper. The number is the weight, in In mathematics, the binary (base-2) numbering
pounds, of 500 17"× 22" sheets of that type system uses combinations of the digits 0 and 1
of paper. to represent all values.
biometrics The integration of computer first turn on or reset your computer. A set of
authentication and unique human characteris- instructions contained in ROM begin exe-
tics. Biometrics will often include fingerprints, cuting, first running a series of power on self-
retinal scanning, voice recognition, or identifi- tests (POSTs) to check that devices, such as
cation of other unique human characteristics. hard disks, are in working order, then locating
and loading the operating system, and finally
BIOS (basic input/output system) The passing control of the computer over to that
ROM-based software on a motherboard that operating system.
acts as a kind of interpreter between an oper-
ating system and a computer’s hardware. bootable disk Any disk capable of loading
and starting the operating system, although
BIOS CMOS setup program Program that most often used when referring to a floppy
modifies BIOS settings in the CMOS memory. disk. In these days of larger and larger oper-
This program is available at system startup ating systems, it is less common to boot from
time by pressing a key combination such as a floppy disk. In some cases, all of the files
Alt+F1 or Ctrl+F2. needed to start the operating system will not
BIOS shadow A copy of the BIOS in fit on a single floppy disk, which makes it
memory. impossible to boot from a floppy.
bit Contraction of BInary digiT. A bit is the bottleneck Bottlenecks are locations
basic unit of information in the binary num- where the performance is hindered due to poor
bering system, representing either 0 (for off) or performance. This can include processor speed,
1 (for on). Bits can be grouped together to make amount of RAM, and hard drive speeds.
up larger storage units, the most common being Bourne An early command interpreter and
the 8-bit byte. A byte can represent all kinds script language for Unix created by S.R. Bourne
of information, including the letters of the of Bell Laboratories.
alphabet, the numbers 0 through 9, and
common punctuation symbols. BPS (bits per second) A measurement of
how much data (how many bits) is being trans-
bit-mapped font A character in a specific mitted in one second. Typically used to describe
typestyle and size, defined by a pattern of dots. the speed of asynchronous communications
The computer must keep a complete set of (modems).
bitmaps for every typestyle you use on your
system, and these bitmaps can consume large bridge This type of connectivity device oper-
amounts of disk space. ates in the Data Link layer of the OSI model. It
is used to join similar topologies (Ethernet to
boot The loading of an operating system Ethernet, Token Ring to Token Ring) and
into memory, usually from a hard disk, to divide traffic on network segments. This
although occasionally from a floppy disk. This device will pass information destined for one
is an automatic procedure begun when you
…particular workstation to that segment, but it will not pass broadcast traffic.

broadcasting Sending a signal to all entities that can listen to it. In networking, it refers to sending a signal to all entities connected to that network.

brouter In networking, a device that combines the attributes of a bridge and a router. A brouter can route one or more specific protocols, such as TCP/IP, and bridge all others.

brownout A short period of low voltage, often caused by an unusually heavy demand for power.

browser A piece of software used to access the Internet. Common browsers are Netscape's Navigator and Microsoft's Internet Explorer.

bubble-jet printer A type of sprayed-ink printer; it uses an electric signal that energizes a heating element, causing ink to vaporize and get pushed out of the pinhole and onto the paper.

bug A logical or programming error in hardware or software that causes a malfunction of some sort. If the problem is in software, it can be fixed by changes to the program. If the fault is in hardware, new circuits must be designed and constructed. Some bugs are fatal and cause the program to hang or cause data loss; others are just annoying, and many are never even noticed.

bug-fix A release of hardware or software that corrects known bugs but does not contain additional new features. Such releases are usually designated only by an increase in the decimal portion of the version number; for example, the revision level may advance from 2 to 2.01 or 2.1, rather than from 2 to 3.

bus A set of pathways that allow information and signals to travel between components inside or outside of a computer.

bus clock A chip on the motherboard that produces a type of signal (called a clock signal) that indicates how fast the bus can transmit information.

bus connector slot A slot made up of several small copper channels that grab the matching "fingers" of the expansion circuit boards. The fingers connect to copper pathways on the motherboard.

bus mastering A technique that allows certain advanced bus architectures to delegate control of data transfers between the Central Processing Unit (CPU) and associated peripheral devices to an add-in board.

bus mouse A mouse connected to the computer using an expansion board plugged into an expansion slot, instead of simply connected to a serial port as in the case of a serial mouse.

bus topology Type of physical topology that consists of a single cable that runs to every workstation on the network. Each computer shares that same data and address path. As messages pass through the trunk, each workstation checks to see if the message is addressed to itself. This topology is very difficult to reconfigure, since reconfiguration requires you to disconnect and reconnect a portion of the network (thus bringing the whole network down).
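The address check that every workstation performs on a shared bus, described in the bus topology entry above, can be sketched in a few lines of Python. The frame format and station names here are invented for illustration; they are not part of any real protocol.

```python
# Sketch of address filtering on a bus: every station sees every
# frame on the trunk, but only the addressed station accepts it.
# The frame structure and station names are hypothetical.

def deliver(frame, stations):
    """Offer a frame to every station on the trunk; return acceptors."""
    accepted = []
    for station in stations:
        # Each workstation checks whether the frame is addressed to it.
        if frame["dest"] == station:
            accepted.append(station)
    return accepted

stations = ["ws1", "ws2", "ws3"]
frame = {"dest": "ws2", "payload": "hello"}
print(deliver(frame, stations))  # ['ws2'] — only the addressed station accepts
```

Note that every station still had to examine the frame, which is why traffic on a bus grows expensive as stations are added.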
byte Contraction of BinarY digiT Eight. A group of 8 bits that, in computer storage terms, usually holds a single character, such as a number, letter, or other symbol.

C One of several commonly used shells for administering Unix. (See also Bourne and BASH.) (C is also a programming language.)

cable access methods Methods by which stations on a network get permission to transmit their data.

cache Pronounced cash. A special area of memory, managed by a cache controller, that improves performance by storing the contents of frequently accessed memory locations and their addresses. When the processor references a memory address, the cache checks to see if it holds that address. If it does, the information is passed directly to the processor; if not, a normal memory access takes place instead. A cache can speed up operations in a computer in which RAM access is slow compared with its processor speed, because the cache memory is always faster than normal RAM.

cache memory Fast SRAM memory used to store, or cache, frequently used instructions and data.

capacitive keyboard Keyboard designed with two sheets of semi-conductive material separated by a thin sheet of Mylar inside the keyboard. When a key is pressed, the plunger presses down and a paddle connected to the plunger presses the two sheets of semi-conductive material together, changing the total capacitance of the two sheets. The controller can tell by the capacitance value returned which key was pressed.

capacitive touch screen Type of display monitor that has two clear plastic coatings over the screen, separated by air. When the user presses the screen in a particular spot, the coatings are pressed together and the controller registers a change in the total capacitance of the two layers. The controller then determines where the screen was pressed by the capacitance values and sends that information to the computer in the form of x,y coordinates.

capacitor An electrical component, normally found in power supplies and timing circuits, used to store electrical charge.

card services Part of the software support needed for PCMCIA (PC Card) hardware devices in a portable computer, controlling the use of system interrupts, memory, or power management. When an application wants to access a PC Card, it always goes through the card services software and never communicates directly with the underlying hardware.

carpal tunnel syndrome A form of wrist injury caused by holding the hands in an awkward position for long periods of time.

carriage motor Stepper motor used to move the print head back and forth on a dot-matrix printer.

cathode-ray tube See CRT.

CCD (charge-coupled device) A device that allows light to be converted into electrical pulses.

CCITT Acronym for Comité Consultatif International Télégraphique et Téléphonique. An organization, based in Geneva, that develops…
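The check-then-fetch behavior described in the cache entry above can be sketched with a Python dictionary standing in for cache memory; the names and addresses below are illustrative only.

```python
# Sketch of a cache controller's lookup: a hit returns cached data
# immediately; a miss falls through to (slower) main memory and
# fills the cache for future reference.

main_memory = {0x1000: "A", 0x2000: "B"}  # stand-in for normal RAM
cache = {}                                 # stand-in for fast SRAM cache

def read(address):
    if address in cache:             # cache hit: fast path
        return cache[address], "hit"
    value = main_memory[address]     # cache miss: normal memory access
    cache[address] = value           # copy into cache for next time
    return value, "miss"

print(read(0x1000))  # ('A', 'miss') — first access goes to main memory
print(read(0x1000))  # ('A', 'hit')  — repeat access is served from cache
```

A real cache is finite and must also evict entries, which this sketch omits.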
chip creep The slow self-loosening of chips from their sockets on the system board as a result of the frequent heating and cooling of the board (which causes parts of the board—significantly, the chip connector slots—to alternately expand and shrink).

chip puller A tool that is used on older (pre-386) systems to remove the chips without damaging them.

clean install A process by which the current application or operating system is completely removed and a new copy is installed. All traces of the original version are completely removed.

cleaning step The step in the EP print process where excess toner is scraped from the EP drum with a rubber blade.

client A network entity that can request resources from the network or server.

client computers Computers that request resources from a network.

client-server Client-server architecture describes computer programs specifically designed to use the processing power of both the server and the client machines in the completion of their tasks. Generally this means that the client makes an initial request to the server, and the server then does some initial processing on the request. The result of that processing is then returned to the client, or to another machine for additional work to be done with it.

client software Software that allows a device to request resources from a network.

clock doubling Technology that allows a chip to run at the bus's rated speed externally, but still be able to run the processor's internal clock at twice the speed of the bus. This technology improves computer performance.

clock rate See clock speed.

clock signal Built-in metronome-like signal that indicates how fast the components can operate.

clock speed Also known as clock rate. The internal speed of a computer or processor, normally expressed in MHz. The faster the clock speed, the faster the computer will perform a specific operation, assuming the other components in the system, such as disk drives, can keep up with the increased speed.

clock tripling A type of processor design where the processor runs at one speed externally and at triple that speed internally.

cluster The smallest unit of hard disk space that DOS can allocate to a file, consisting of one or more contiguous sectors. The number of sectors contained in a cluster depends on the hard disk type and operating system.

clustering Connecting two or more computers together in such a way that they behave like a single computer. Clustering is often used to provide parallel processing, load balancing, and fault tolerance.

CMIP (Common Management Information Protocol) Designed as a replacement for SNMP, but not yet widely adopted, CMIP is a network management protocol. CMIP provides better security and better reporting of unusual network conditions.

CMOS Acronym for Complementary Metal Oxide Semiconductor. An area of battery-maintained memory that contains settings that determine how a computer is configured.

CMOS battery A battery used to power CMOS memory so that the computer won't lose its settings when powered down.

cold backup site A backup site that contains basic equipment for running a business in case of disaster at the primary location. Cold backup sites require hardware and software setup and configuration before use.

command line Describes a computer interface that uses basic prompts and requires the user to type in commands; also the line itself.

COMMAND.COM Takes commands issued by the user through text strings or click actions and translates them back into calls that can be understood by the lower layers of DOS. It is the vital command interpreter for DOS.

computer name … In Windows 2000, the computer name is always the same as the machine's host name, while in Windows 9x the two can be different.

conditioning step The step in the EP print process where a uniform charge is applied to the EP drum by the charging corona or charging roller.

conductor Any item that permits the flow of electricity between two entities.

CONFIG.SYS In DOS and OS/2, a special text file containing settings that control the way that the operating system works. CONFIG.SYS must be located in the root directory of the default boot disk, normally drive C, and is read by the operating system only once as the system starts running. Some application programs and peripheral devices require you to include special statements in CONFIG.SYS, while other commands may specify the number of disk-read buffers or open files on your system, specify how the disk cache should be configured, or load any special device drivers your system may need.
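As an illustration of the CONFIG.SYS entry above, a minimal file of this kind might read as follows; the paths and values are examples only, not taken from any particular system.

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\MOUSE.SYS
FILES=30
BUFFERS=20
```

Here each DEVICE= statement loads a device driver (HIMEM.SYS is the DOS extended-memory manager), FILES sets how many files may be open at once, and BUFFERS sets the number of disk-read buffers — exactly the kinds of statements the entry describes.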
…system. CP/M was a command-line system that was developed by Gary Kildall.

conventional memory The amount of memory accessible by DOS in PCs using an Intel processor operating in real mode, normally the first 640K.

cooperative hot backup site A cooperative hot backup site (see entry for hot backup site) is shared between two or more businesses in hopes of reducing the total cost of ownership (TCO). The backup site is designed to meet the needs of each company involved.

cooperative multitasking A form of multitasking in which all running applications must work together to share system resources.

corona roller Type of transfer corona assembly that uses a charged roller to apply charge to the paper.

corona wire Type of transfer corona assembly. Also, the wire in that assembly that is charged by the high-voltage supply. It is narrow in diameter and located in a special notch under the EP print cartridge.

Counter Logs Monitor Lets you set a period of time during which the information gathered in the System Monitor will be recorded in a text file on the hard drive. This allows you to maintain a record of performance. This record can be reviewed at a later date or used to compare to data gathered at another time.

CPU (Central Processing Unit) See Central Processing Unit (CPU).

CPU clock Type of clock signal that dictates how fast the CPU can run.

crosstalk Problem related to electromagnetic fields when two wires carrying electrical signals run parallel and one of the wires induces a signal in the second wire. If these wires are carrying data, the extra, unintended signal can cause errors in the communication. Crosstalk is especially a problem in unshielded parallel cables that are longer than 10 feet.

CRT Acronym for cathode-ray tube. A display device used in computer monitors and television sets. A CRT display consists of a glass vacuum tube that contains one electron gun for a monochrome display, or three (red, green, and blue) electron guns for a color display. Electron beams from these guns sweep rapidly across the inside of the screen from the upper-left to the lower-right of the screen. The inside of the screen is coated with thousands of phosphor dots that glow when they are struck by the electron beam. To stop the image from flickering, the beams sweep at a rate of between 43 and 87 times per second, depending on the phosphor persistence and the scanning mode used—interlaced or non-interlaced. This is known as the refresh rate and is measured in Hz. The Video Electronics Standards Association (VESA) recommends a vertical refresh rate of 72Hz, noninterlaced, at a resolution of 800 by 600 pixels.

cylinder A hard disk consists of two or more platters, each with two sides. Each side is further divided into concentric circles known as tracks, and all the tracks at the same concentric position on a disk are known collectively as a cylinder.

daily rotation Daily rotation is not considered a suitable backup strategy. In a daily rotation, the same tape is used every day. There is no offsite storage of media, and no opportunity to restore data unless the problem is discovered within a day.

daisy-chaining Pattern of cabling where the cables run from the first device to the second, second to the third, and so on. If the devices have both an "in" and an "out," the in of the first device of each pair is connected to the out of the second device of each pair.

daisy-wheel printer An impact printer that uses a plastic or metal print mechanism with a different character on the end of each spoke of the wheel. As the print mechanism rotates to the correct letter, a small hammer strikes the character against the ribbon, transferring the image onto the paper.

DAT See digital audiotape (DAT).

data bits In asynchronous transmissions, the bits that actually comprise the data; usually 7 or 8 data bits make up the data word.

data bus Bus used to send data to and receive data from the microprocessor.

data compression Any method of encoding data so that it occupies less space than in its original form.

data encoding scheme (DES) The method used by a disk controller to store digital information onto a hard disk or floppy disk. (Note that the abbreviation DES more commonly stands for the Data Encryption Standard, an unrelated encryption algorithm that randomizes information so that it is impossible to determine the encryption key even if some of the original text is known.)

Data Link layer The second of seven layers of the International Standards Organization's Open Systems Interconnection (ISO/OSI) model for computer-to-computer communications. The Data Link layer validates the integrity of the flow of data from one node to another by synchronizing blocks of data and by controlling the flow of data.

data set ready See DSR.

data terminal equipment See DTE.

data terminal ready See DTR.

data transfer rate The speed at which a disk drive can transfer information from the drive to the processor, usually measured in megabits or megabytes per second.

daughterboard A printed circuit board that attaches to another board to provide additional functions.

DB-25 A 25-pin connector used to interface external devices with the computer. Commonly used for a parallel port or original SCSI-1 connector.

DB connector Any of several types of cable connectors used for parallel or serial cables. The number following the letters DB (for data bus) indicates the number of pins that the connector usually has.

de facto Latin for "by fact." Any standard that is a standard because everyone is using it.

default gateway The router that all packets are sent to when the workstation doesn't know where the destination station is, or when it can't find the destination station.

de jure Latin for "by law." Any standard that is a standard because a standards body decided it should be so.

debouncing A keyboard feature that eliminates unintended triggering of keystrokes. It works by having the keyboard controller constantly scan the keyboard for keystrokes. Only keystrokes that are pressed for more than two scans are considered keystrokes. This prevents spurious electronic signals from generating input.

decimal The base-10 numbering system that uses the familiar numbers 0–9.

dedicated server A server that is assigned to perform a specific application or service.

default gateway If a user needs to communicate by TCP/IP with a computer that is not on their subnet (the local network segment), the computer needs to use a gateway to access this remote network. The default gateway is simply the path that is taken by all outgoing traffic unless another path is specified.

denial of service attack (DoS) A type of network attack that prevents users, even legitimate users, from accessing the network.

DES See data encoding scheme (DES).

Desktop Contains the visible elements of Windows and defines the limits of the graphic environment.

Desktop Control Panel Windows utility that is used to configure the system so it is more user friendly. This Control Panel contains the settings for the background color and pattern as well as screen saver settings.

developing roller The roller inside a toner cartridge that presents a uniform line of toner to help apply the toner to the image written on the EP drum.

developing step The step in the EP print process where the image written on the EP drum by the laser is developed, that is, it has toner stuck to it.

device driver A small program that allows a computer to communicate with and control a device.
…blocks, thereby freeing up space in conventional memory.

diagnostic program A program that tests computer hardware and peripherals for correct operation. In the PC, some faults are easy to find, and these are known as "hard faults"; the diagnostic program will diagnose them correctly every time. Others, such as memory faults, can be difficult to find; these are called "soft faults" because they do not occur every time the memory location is tested, but only under very specific circumstances.

differential backup A type of backup that backs up files that have changed since the last full backup.

digital audiotape (DAT) A method of recording information in digital form on a small audiotape cassette. Many gigabytes of information can be recorded on a cassette, and so a DAT can be used as a backup medium. Like all tape devices, however, DATs are relatively slow.

digital signal A signal that consists of discrete values. These values do not change over time; in effect, they change instantly from one value to another.

digital signature A digital signature is used to verify the identity of the sender and/or origin of the message. It is a unique value associated with a transaction and cannot be forged.

DIMM (Dual Inline Memory Module) Memory module that is similar to a SIMM (Single Inline Memory Module), except that a DIMM is double-sided. There are memory chips on both sides of the memory module.

DIN-n Circular type of connector used with computers. (The n represents the number of pins.)

DIP (Dual Inline Package) A standard housing constructed of hard plastic commonly used to hold an integrated circuit. The circuit's leads are connected to two parallel rows of pins designed to fit snugly into a socket; these pins may also be soldered directly to a printed-circuit board. If you try to install or remove dual inline packages, be careful not to bend or damage their pins.

DIP switch A small switch used to select the operating mode of a device, mounted as a Dual Inline Package. DIP switches can be either sliding or rocker switches and are often grouped together for convenience. They are used on printed circuit boards, dot-matrix printers, modems, and other peripherals.

direct memory access See DMA (direct memory access).

directory Directories are used to organize files on the hard drive. Another name for a directory is a folder. Directories created inside or below others are called "subfolders" or "subdirectories."

Direct Rambus A memory bus that transfers data at 800MHz over a 16-bit memory bus. Direct Rambus memory modules (often called RIMMs), like DDR SDRAM, can transfer data on both the rising and falling edges of a clock cycle.

Direct Rambus RAM A type of memory created by Rambus Inc. with transfer speeds of up to 800MHz.
directory services Software that stores information about objects on a network and makes this information available to users and network administrators. Windows 2000 uses AD, while Novell 4 and newer versions use NDS.

direct-solder method A method of attaching chips to the motherboard where the chip is soldered directly to the motherboard.

disaster recovery The process of rebuilding or repairing computer systems and data after a disaster has struck.

disaster recovery plan A carefully created plan, which is regularly updated, that outlines the steps to follow in dealing with a data disaster. It can include disaster prevention or fault-tolerance planning.

disk cache An area of computer memory where data is temporarily stored on its way to or from a disk. A disk cache mediates between the application and the hard disk, and when an application asks for information from the hard disk, the cache program first checks to see if that data is already in the cache memory. If it is, the disk cache program loads the information from the cache memory rather than from the hard disk. If the information is not in memory, the cache program reads the data from the disk, copies it into the cache memory for future reference, and then passes the data to the requesting application.

disk-caching program A program that reads the most commonly accessed data from disk and keeps it in memory for faster access.

disk controller The electronic circuitry that controls and manages the operation of floppy or hard disks installed in the computer. A single disk controller may manage more than one hard disk; many disk controllers also manage floppy disks and compatible tape drives.

disk drive A peripheral storage device that reads and writes to magnetic or optical disks. When more than one disk drive is installed on a computer, the operating system assigns each drive a unique name—for example, A and C in DOS, Windows, and OS/2.

disk duplexing In networking, a fault-tolerant technique that writes the same information simultaneously onto two different hard disks, each attached to its own disk controller. Disk duplexing is supported by most of the major network operating systems and is designed to protect the system against a single disk failure; it is not designed to protect against multiple disk failures and is no substitute for a well-planned series of disk backups.

diskless workstation A networked computer that does not have any local disk storage capability.

disk mirroring In networking, a fault-tolerant technique that writes the same information simultaneously onto two different hard disks, using the same disk controller. In the event of one disk failing, information from the other can be used to continue operations. Disk mirroring is offered by most of the major network operating systems and is designed to protect the system against a single disk failure; it is not designed to protect against multiple disk failures and is no substitute for a well-planned series of disk backups.

disk operating system See DOS.
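The write path described in the disk mirroring entry above — the same block written to both disks so that either one alone can satisfy reads — can be sketched as follows; the dictionary "disks" and function names are invented for illustration.

```python
# Sketch of disk mirroring: every write goes to both disks, so a
# single disk failure does not lose data.

disk_a, disk_b = {}, {}       # stand-ins for two physical disks

def mirrored_write(block_no, data):
    """Write the same block to both disks."""
    for disk in (disk_a, disk_b):
        disk[block_no] = data

def read_with_failover(block_no):
    """Read from the first disk that still holds the block."""
    for disk in (disk_a, disk_b):
        if block_no in disk:
            return disk[block_no]
    raise IOError("block lost on both disks")

mirrored_write(7, b"payroll")
disk_a.clear()                      # simulate one disk failing
print(read_with_failover(7))        # b'payroll' — survives a single failure
```

As the entry warns, this protects only against a single disk failure; if both copies are lost, no mirroring scheme can help, which is why backups are still required.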
distributed processing A computer system in which processing is performed by several separate computers linked by a communications network. The term often refers to any computer system supported by a network, but more properly refers to a system in which each computer is chosen to handle a specific workload and the network supports the system as a whole.

DIX Ethernet The original name for the Ethernet network technology. Named after the original developer companies: Digital, Intel, and Xerox.

DLT (digital linear tape) DLT is a form of backup media that uses magnetic ribbon tape within a cartridge to store data.

DMA (direct memory access) A method of transferring information directly from a mass-storage device such as a hard disk or from an adapter card into memory (or vice versa), without the information passing through the processor.

docking station A hardware system into which a portable computer fits so that it can be used as a full-fledged desktop computer. Docking stations vary from simple port replicators (that allow you access to parallel and serial ports and a mouse) to complete systems (that give you access to network connections, CD-ROMs, even a tape backup system or PCMCIA ports).

documentation The process of carefully collecting information pertaining to an event or a plan (as well as the collection itself). All relevant information is documented and kept in a safe place for future reference.

domain The security structure for Windows NT Server and Windows 2000 Active Directory; also a namespace in TCP/IP's DNS structure.

Domain Name System (DNS) DNS allows TCP/IP-capable users anywhere in the world to find resources in other companies or countries by using their domain name. Each domain is an independent namespace for a particular organization, and DNS servers manage requests for information about the IP addresses of particular DNS entries. DNS is used to manage all names on the Internet.

dongle A special cable that provides a connector to a circuit board that doesn't have one. For example, a motherboard may use a dongle to provide a serial port when there is a ribbon cable connector for the dongle on the motherboard, but there is no serial port.

DOS Acronym for disk operating system, an operating system originally developed by Microsoft for the IBM PC. DOS exists in two very similar versions: MS-DOS, developed and marketed by Microsoft for use with IBM-compatible computers; and PC-DOS, supported and sold by IBM for use only on computers manufactured by IBM.

DOS Environment Variables Variables that specify global things like the path that DOS searches to find executables.
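The name-to-address mapping that the Domain Name System entry above describes can be sketched with a toy lookup table. Real DNS is hierarchical and distributed across many servers; this sketch, with invented records, only illustrates the core idea of resolving a name to an IP address.

```python
# Toy sketch of DNS resolution: a table maps domain names to IP
# addresses. The records below are illustrative, not real data.

records = {
    "example.com":     "93.184.216.34",
    "www.example.com": "93.184.216.34",
}

def resolve(name):
    """Look up a domain name; names are case-insensitive and a
    trailing dot (the DNS root) is ignored."""
    try:
        return records[name.lower().rstrip(".")]
    except KeyError:
        raise LookupError(f"no such domain: {name}")

print(resolve("WWW.Example.Com."))  # 93.184.216.34
```

The case-folding and trailing-dot handling mirror how domain names actually behave: WWW.EXAMPLE.COM. and www.example.com name the same host.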
DOS extender A small program that extends the range of DOS memory. For example, HIMEM.SYS allows DOS access to the memory ranges above 1024K.

DOS prompt A visual confirmation that DOS is ready to receive input from the keyboard. The default prompt includes the current drive letter followed by a right angle bracket (for example, C>). You can create your own custom prompt with the PROMPT command.

DOS shell An early graphic user interface for DOS that allowed users to manage files and run programs through a simple text interface and even use a mouse. It was soon replaced by Windows.

dot-matrix printer An impact printer that uses columns of small pins and an inked ribbon to create the tiny pattern of dots that form the characters. Dot-matrix printers are available in 9-, 18-, or 24-pin configurations.

dot pitch In a monitor, the vertical distance between the centers of like-colored phosphors on the screen of a color monitor, measured in millimeters (mm).

dots per inch (dpi) A measure of resolution expressed by the number of dots that a device can print or display in one inch.

double data rate synchronous dynamic RAM (DDR SDRAM) Supports data transfer on both edges of the clock cycle, which effectively doubles memory chip throughput.

double-density disk A floppy disk with a storage capacity of 360KB.

downtime The measurement of time during which a computer system is unusable. This includes both the time during a failure and the time involved in repairing the failure.

DRAM See dynamic RAM (DRAM).

drawing tablet Pointing device that includes a pencil-like device (called a stylus) for drawing on its flat rubber-coated sheet of plastic.

drive bay An opening in the system unit into which you can install a floppy disk drive, hard disk drive, or tape drive.

drive geometry Term used to describe the number of cylinders, read/write heads, and sectors in a hard disk.

drive hole Hole in a floppy disk that allows the motor in the disk drive to spin the disk. Also known as the hub hole.

drive letter In DOS, Windows, and OS/2, the drive letter is a designation used to specify a particular hard or floppy disk. For example, the first floppy disk is usually referred to as drive A, and the first hard disk as drive C.

driver See device driver.

driver signing In order to prevent viruses and poorly written drivers from damaging your system, Windows 2000 uses a process called driver signing that allows companies to digitally sign their device software, and it also allows administrators to block the installation of unsigned drivers.

driver software See device driver.

D-Shell See DB connector.
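The drive geometry entry above implies the classic capacity arithmetic for a cylinder/head/sector (CHS) layout: total bytes are the product of cylinders, heads, sectors per track, and bytes per sector. The geometry values below are illustrative, not taken from any particular drive.

```python
def chs_capacity(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    """Capacity = cylinders x heads x sectors/track x bytes/sector."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

# Illustrative geometry: 1024 cylinders, 16 heads, 63 sectors per track
size = chs_capacity(1024, 16, 63)
print(size)                 # 528482304 bytes
print(size // 2**20)        # 504 (MB)
```

This particular geometry gives roughly 504MB, which is why early BIOSes limited by 1024 cylinders, 16 heads, and 63 sectors could not address larger disks without translation schemes.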
DSR Abbreviation for data set ready. A hard- duplex. Half-duplex channels can transmit
ware signal defined by the RS-232-C standard only or receive only. Most dial-up services
to indicate that the device is ready. available to PC users take advantage of full-
duplex capabilities, but if you cannot see what
D-Sub See DB connector. you are typing, switch to half duplex. If you are
DTE Abbreviation for data terminal equip- using half duplex and you can see two of every
ment. In communications, any device, such as a character you type, change to full duplex.
terminal or a computer, connected to a commu- duplex printing Printing a document on
nications channel or public network. both sides of the page so that the appropriate
DTR Abbreviation for data terminal ready. A pages face each other when the document is
hardware signal defined by the RS-232-C stan- bound.
dard to indicate that the computer is ready to dynamic electricity See electricity.
accept a transmission.
Dynamic Host Configuration Protocol
dual-booting If a single machine must be (DHCP) DHCP manages the automatic
used for many tasks, it may be necessary for assignment of TCP/IP addressing information
it to have multiple operating systems installed (such as the IP address, subnet mask, default
simultaneously. To do this a boot manager pre- gateway and DNS server). This can save a great
sents the user with a choice of which operating deal of time when configuring and maintaining
system to use at startup. To use a different OS a TCP/IP network.
the user would have to shut down the system,
restart it, and select the other OS. Dynamic Link Library (DLL) files Windows
component files that contain small pieces of
Dual Inline Memory Module See DIMM executable code that are shared between mul-
(Dual Inline Memory Module). tiple Windows programs. They are used to
Dual Inline Package See DIP (Dual Inline eliminate redundant programming in certain
Package). Windows applications. DLLs are used exten-
sively in Microsoft Windows, OS/2, and in
dumb terminal A combination of keyboard Windows NT. DLLs may have filename exten-
and screen that has no local computing power, sions of .dll, .drv, or .fon.
used to input information to a large, remote
computer, often a minicomputer or a main- dynamic RAM (DRAM) A common type of
frame. This remote computer provides all the computer memory that uses capacitors and
processing power for the system. transistors storing electrical charges to repre-
sent memory states. These capacitors lose
duplex In asynchronous transmissions, the their electrical charge, and so they need to be
ability to transmit and receive on the same refreshed every millisecond, during which time
channel at the same time; also referred to as full they cannot be read by the processor. DRAM
chips are small, simple, cheap, easy to make, EDO (Extended Data Out) RAM A type of
and hold approximately four times as much DRAM that increases memory performance by
information as a static RAM (SRAM) chip of eliminating wait states.
similar complexity. However, they are slower
than static RAM. Processors operating at clock EEPROM Acronym for Electrically Eras-
speeds of 25MHz or more need DRAM with able Programmable Read-Only Memory.
access times faster than 80 nanoseconds (80 billionths of a second), while SRAM chips can be read in as little as 15 to 30 nanoseconds.

dynamic state table A dynamic state table is used for packet filtering. The dynamic state table keeps track of all communication sessions between stations inside and outside of a firewall. The list is called dynamic because it is constantly updated as communication sessions are established and ended.

ECC (error correcting circuits) Began with the Pentium class of computers. Parity checking provided single-bit error detection for the system memory, but did not handle multi-bit errors and provided no means to correct memory errors. ECC will detect both single-bit and multi-bit errors, as well as attempt to correct single-bit errors. Like parity checking, ECC requires a setting in the BIOS program to be enabled.

edge connector A form of connector consisting of a row of etched contacts along the edge of a printed circuit board that is inserted into an expansion slot in the computer.

eDirectory Novell directory software (currently version 8.6.1) is a Lightweight Directory Access Protocol (LDAP)-enabled, directory-based identity management system that centralizes the management of user identities, access privileges, and other network resources.

EEPROM A memory chip that maintains its contents without electrical power, and whose contents can be erased and reprogrammed either within the computer or from an external source. EEPROMs are used where the application requires stable storage without power but may have to be reprogrammed.

EGA Acronym for Enhanced Graphics Adapter. A video adapter standard that provides medium-resolution text and graphics. EGA can display 16 colors at the same time from a choice of 64, with a horizontal resolution of 640 pixels and a vertical resolution of 350 pixels. EGA has been superseded by VGA and SVGA.

EISA Acronym for Extended Industry Standard Architecture. A PC bus standard that extends the traditional AT-bus to 32 bits and allows more than one processor to share the bus. EISA has a 32-bit data path and, at a bus speed of 8MHz, can achieve a maximum throughput of 33 megabytes per second.

EISA Configuration Utility (EISA Config) The utility used to configure an EISA bus expansion card.

Electrically Erasable Programmable Read-Only Memory See EEPROM.

electricity The flow of free electrons from one molecule of substance to another. This flow of electrons is used to do work.
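The ECC entry above says that error-correcting memory can detect multi-bit errors and correct single-bit errors. Memory ECC is typically built on Hamming-style codes; as a minimal illustration only (a toy Hamming(7,4) code protecting a single 4-bit value, not the wider codes real memory controllers use, and with function names invented here), a sketch:

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]   # data bits d0..d3
    c = [0] * 8                                 # codeword positions 1..7
    c[3], c[5], c[6], c[7] = d[0], d[1], d[2], d[3]
    c[1] = c[3] ^ c[5] ^ c[7]   # parity over positions whose index has bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]   # parity over positions whose index has bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]   # parity over positions whose index has bit 2 set
    return c[1:]

def hamming74_decode(bits):
    """Correct up to one flipped bit and return the original 4-bit value."""
    c = [0] + list(bits)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if syndrome:
        c[syndrome] ^= 1    # the syndrome is the position of the flipped bit
    return c[3] | c[5] << 1 | c[6] << 2 | c[7] << 3

codeword = hamming74_encode(0b1011)
codeword[4] ^= 1            # simulate a single-bit memory error
assert hamming74_decode(codeword) == 0b1011
```

A real ECC module does the equivalent in hardware, typically across 64 data bits with 8 check bits, which also allows it to detect (though not correct) double-bit errors.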
EPROM Acronym for erasable programmable read-only memory. A memory chip that maintains its contents without electrical power, and whose contents can be erased and reprogrammed by removing a protective cover and exposing the chip to ultraviolet light.

ergonomics Standards that define the positioning and use of the body to promote a healthy work environment.

error/event log Error/event logs record precise information about server events, such as services starting and stopping, or users logging on and off and accessing resources. This is one of the first places to look for details about a problem.

ESD See electrostatic discharge (ESD).

ESD mat Preventive measure to guard against the effects of ESD. The excess charge is drained away from any item that comes in contact with it.

ESD wrist strap An antistatic device that attaches between the wrist of the user and a ground, used to direct static charges away from the user and computer components. Often includes an embedded resistor.

Ethernet A network technology based on the IEEE 802.3 CSMA/CD standard. The original Ethernet implementation specified 10Mbps, baseband signaling, coaxial cable, and CSMA/CD media access.

even parity A technique that counts the number of 1's in a binary number and, if the number of 1's is not an even number, adds a digit to make it even. (See also parity.)

event ID These numbers match a text description in a message file within Event Viewer. The numbers can be used by product support representatives to understand what occurred in the system. See Event Viewer.

Event Viewer A Microsoft utility that maintains logs about application, security, and system events on a computer. You can use Event Viewer to view and manage the event logs, gather information about hardware and software problems, and monitor security events.

exit roller Found on laser and page printers, the mechanism that guides the paper out of the printer into the paper-receiving tray.

expanded memory page frame See page frame.

expanded memory specification (EMS) The original version of the Lotus-Intel-Microsoft Expanded Memory Specification (LIM EMS) that lets DOS applications use more than 640KB of memory space.

expansion bus An extension of the main computer bus that includes expansion slots for use by compatible adapters, such as memory boards, video adapters, hard disk controllers, and SCSI interface cards.

expansion card A device that can be installed into a computer's expansion bus.

expansion slot One of the connectors on the expansion bus that gives an adapter access to the system bus. You can add as many additional adapters as there are expansion slots inside your computer.
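The counting rule in the even parity entry is easy to show concretely. A minimal Python sketch (the function name is invented here purely for illustration):

```python
def even_parity_bit(value):
    """Return the parity bit that makes the total number of 1s even."""
    ones = bin(value).count("1")
    return ones % 2

assert even_parity_bit(0b1011) == 1   # three 1s: add a 1 to make four
assert even_parity_bit(0b1001) == 0   # two 1s: already even
```

Parity memory stores this extra bit alongside every byte; on read, a recomputed parity that disagrees with the stored bit signals a single-bit error (compare the ECC entry, which goes further and corrects such errors).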
extended data output RAM (EDO RAM) A variant of dynamic random access memory that helps improve memory speed and performance by altering the timing and sequence of signals that activate the circuitry for accessing memory locations.

extended DOS partition A further optional division of a hard disk, after the primary DOS partition, that functions as one or more additional logical drives. A logical drive is simply an area of a larger disk that acts as though it were a separate disk with its own drive letter.

Extended Graphics Array See XGA.

Extended Industry Standard Architecture See EISA.

extended memory manager A device driver that supports the software portion of the extended memory specification in an IBM-compatible computer.

Extended Memory System (XMS) Memory above 1,024KB that is used by Windows and Windows-based programs. This type of memory cannot be accessed unless the HIMEM.SYS memory manager is loaded in the DOS CONFIG.SYS with a line like DEVICE=HIMEM.SYS.

extended partition If all of the space on a drive is not used in the creation of the drive's primary partition, a second partition can be created out of the remaining space. Called the extended partition, this second partition can hold one or more logical drives.

external bus An external component, connected through expansion cards and slots, that allows the processor to talk to other devices. This component allows the CPU to talk to the other devices in the computer and vice versa.

external cache memory Separate expansion board that installs in a special processor-direct bus that contains cache memory.

external commands Commands that are not contained within COMMAND.COM. They are represented by a .COM or .EXE extension.

external hard disk A hard disk packaged in its own case with cables and an independent power supply rather than a disk drive housed inside and integrated with the computer's system unit.

external modem A stand-alone modem, separate from the computer and connected by a serial cable. LEDs on the front of the chassis indicate the current modem status and can be useful in troubleshooting communications problems. An external modem is a good buy if you want to use a modem with different computers at different times or with different types of computer.

failback The process by which workload is transferred back to the now-operational server after a fault resulting in failover.

failover The process by which a backup server, network component, or service takes up the workload of a failed member.

FAQ Acronym for Frequently Asked Question. A document that lists some of the more commonly asked questions about a product or component. When researching a problem, the FAQ is usually the best place to start.
FAT See file allocation table (FAT).

fault tolerance A method of preparing a network system, or some part of that system, to improve its ability to function despite a hardware, software, or system fault. This includes installation of multiples of the same component to prevent a single point of failure.

fax modem An adapter that fits into a PC expansion slot and provides many of the capabilities of a full-sized fax machine, but at a fraction of the cost.

FDDI See fiber distributed data interface (FDDI).

FDISK.EXE The DOS utility that is used to partition hard disks for use with DOS.

feed roller The rubber roller in a laser printer that feeds the paper into the printer.

field replacement unit See FRU (field replacement unit).

file allocation table (FAT) A table maintained by DOS or OS/2 that lists all the clusters available on a disk. The FAT includes the location of each cluster, as well as whether it is in use, available for use, or damaged in some way and therefore unavailable. FAT also keeps track of which pieces belong to which file.

file compression program An application program that shrinks program or data files, so that they occupy less disk space. The file must then be extracted or decompressed before you can use it. Many of the most popular file compression programs are shareware, like WinZIP, PKZIP, LHA, and StuffIt for the Macintosh, although utility packages like PC Tools from Central Point Software also contain file compression programs.
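The statement in the file allocation table (FAT) entry that the table "keeps track of which pieces belong to which file" can be modeled in a few lines. The sketch below uses a hypothetical, much-simplified FAT in which each in-use cluster maps to the next cluster of the same file and -1 marks end-of-chain; real FAT variants use reserved numeric values for end-of-chain, free, and bad clusters:

```python
END_OF_CHAIN = -1

def file_clusters(fat, start_cluster):
    """Follow one file's cluster chain from its starting cluster."""
    chain = [start_cluster]
    while fat[chain[-1]] != END_OF_CHAIN:
        chain.append(fat[chain[-1]])
    return chain

# hypothetical table: one file occupies clusters 2 -> 5 -> 6, another just 3
fat = {2: 5, 5: 6, 6: END_OF_CHAIN, 3: END_OF_CHAIN}
assert file_clusters(fat, 2) == [2, 5, 6]
assert file_clusters(fat, 3) == [3]
```

This is also why fragmentation (defined later in this glossary) slows a disk down: the clusters in a chain need not be adjacent on the platter.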
file server A networked computer used to store files for access by other client computers on the network. On larger networks, the file server may run a special network operating system; on smaller installations, the file server may run a PC operating system supplemented by peer-to-peer networking software.

file sharing In networking, the sharing of files via the network file server. Shared files can be read, reviewed, and updated by more than one individual. Access to the file or files is often regulated by password protection, account or security clearance, or file locking, to prevent simultaneous changes from being made by more than one person at a time.

File Transfer Protocol (FTP) FTP is used to transfer large files across the Internet or any TCP/IP network. Special servers, called FTP servers, store information and then transfer it back to FTP clients as needed. FTP servers can also be secured with a username and password to prevent unauthorized downloading (retrieval of a file from the server) or uploading (placing of a file on the server).

FILER Novell NetWare 3.x utility used to manage files on a network server.

firewall A network component (either hardware or software) that provides a secure barrier between networks or network segments. Firewalls use packet filtering, application filtering, or circuit-level filtering to prevent attacks or unauthorized access.

FireWire See IEEE-1394.

firmware Any software stored in a form of read-only memory—ROM, EPROM, or EEPROM—that maintains its contents when power is removed.

fixed disk A disk drive that contains several disks (also known as platters) stacked together and mounted through their centers on a small rod. The disks rotate as read/write heads float above the disks that make, modify, or sense changes in the magnetic positions of the coatings on the disk.

fixed resistor Type of resistor that is used to reduce the current by a certain amount. Fixed resistors are color coded to identify their resistance values and tolerance bands.

flash memory A special form of non-volatile EEPROM that can be erased at signal levels normally found inside the PC, so that you can reprogram the contents with whatever you like without pulling the chips out of your computer. Also, once flash memory has been programmed, you can remove the expansion board it is mounted on and plug it into another computer if you wish.

flash utility A small software program created to update the firmware software that runs individual hardware components.

flatbed scanner An optical device used to digitize a whole page or a large image.

flat-panel display In laptop and notebook computers, a very narrow display that uses one of several technologies, such as electroluminescence, LCD, or thin film transistors.

flavor A term used in the Unix world to denote a variation or distribution of the Unix operating system.
floating-point calculation A calculation of numbers whose decimal point is not fixed but moves or floats to provide the best degree of accuracy. Floating-point calculations can be implemented in software, or they can be performed much faster by a separate floating-point processor.

floating-point processor A special-purpose, secondary processor designed to perform floating-point calculations much faster than the main processor.

floppy disk A flat, round, magnetically coated plastic disk enclosed in a protective jacket. Data is written onto the floppy disk by the disk drive's read/write heads as the disk rotates inside the jacket. It can be used to distribute commercial software, to transfer programs from one computer to another, or to back up files from a hard disk. Floppy disks in personal computing are of two physical sizes, 5.25" or 3.5", and a variety of storage capacities. The 5.25" floppy disk has a stiff plastic external cover, while the 3.5" floppy disk is enclosed in a hard plastic case. IBM-compatibles use 5.25" and 3.5" disks, and the Macintosh uses 3.5" disks.

floppy disk controller The circuit board that is installed in a computer to translate signals from the CPU into signals that the floppy disk drive can understand. Often it is integrated into the same circuit board that houses the hard disk controller; it can, however, be integrated into the motherboard in the PC.

floppy disk drive A device used to read and write data to and from a floppy disk. Floppy disk drives may be full-height drives, but more commonly these days they are half-height drives.

floppy drive cable A cable that connects the floppy drive(s) to the floppy drive controller. The cable is a 34-wire ribbon cable that usually has three connectors.

floppy drive interfaces A connector on a motherboard used to connect floppy drives to the motherboard.

floptical disk A removable optical disk with a recording capacity of between 20 and 25 megabytes.

flux transition Presence or absence of a magnetic field in a particle of the coating on the disk. As the disk passes over an area, the electromagnet is energized to cause the material to be magnetized in a small area.

footprint The amount of desktop or floor space occupied by a computer or display terminal. By extension, also refers to the size of software items such as applications or operating systems.

form factor There are two primary form factors for server machines—tower or rack mount. The form factor defines the type of case that the server is housed in, and which type you have will make a significant difference as to what steps you take when moving the computer into its place in the server room.

FORMAT.COM External DOS command that prepares the partition to store information using the FAT system as required by DOS and Windows 9x.

formatter board Type of circuit board that takes the information the printer receives from the computer and turns it into commands for the various components in the printer.
formatting 1. To apply the page-layout commands and font specifications to a document and produce the final printed output. 2. The process of initializing a new, blank floppy disk or hard disk so that it can be used to store information.

form factors Physical characteristics and dimensions of drive styles.

form feed (FF) A printer command that advances the paper in the printer to the top of the next page by pressing the FF button on the printer.

fragmentation A disk storage problem that exists after several smaller files have been deleted from a hard disk. The deletion of files leaves the disk with areas of free disk space scattered throughout the disk. The fact that these areas of disk space are located so far apart on the disk causes slower performance because the disk read/write heads have to move all around the disk's surface to find the pieces of one file.

free memory An area of memory not currently in use.

Frequently Asked Question See FAQ.

FPT (Force Perfect Termination) Uses diode switching and biasing to fill any fluctuations between a cable and associated devices. FPT (Force Perfect Termination) is an advanced form of active termination.

full AT A type of motherboard form factor where the motherboard is the same size as the original IBM AT computer's motherboard.

full backup Creates a full duplication of all data each time that the backup process is executed.

full-duplex communications Communications where both entities can send and receive simultaneously.

function keys The set of programmable keys on the keyboard that can perform special tasks assigned by the current application program.

fuser Device on an EP printer that uses two rollers to heat the toner particles and melt them to the paper. The fuser is made up of a halogen heating lamp, a Teflon-coated aluminum fusing roller, and a rubberized pressure roller. The lamp heats the aluminum roller. As the paper passes between the two rollers, the rubber roller presses the paper against the heated roller. This causes the toner to melt and become a permanent image on the paper.

game port A DB-15 connector used to connect game devices (like joysticks) to a computer.
gigabyte One billion bytes; however, bytes are most often counted in powers of 2, and so a gigabyte becomes 2 to the 30th power, or 1,073,741,824 bytes.

GPF See General Protection Fault (GPF).

graphical user interface (GUI) A graphics-based user interface that allows users to select files, programs, or commands by pointing to pictorial representations on the screen rather than by typing long, complex commands from a command prompt. Application programs

hand-held scanner Type of scanner that is small enough to be held in your hand. Used to digitize a relatively small image or artwork, it consists of the controller, CCD, and light source contained in a small enclosure with wheels on it.

hard disk controller An expansion board that contains the necessary circuitry to control and coordinate a hard disk drive. Many hard disk controllers are capable of managing more than one hard disk, as well as floppy disks and even tape drives.
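The powers-of-two arithmetic in the gigabyte entry can be checked directly:

```python
# bytes are counted in powers of 2
assert 2 ** 10 == 1024              # one kilobyte
assert 2 ** 20 == 1024 ** 2         # one megabyte
assert 2 ** 30 == 1_073_741_824     # one gigabyte, as the entry states
```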
hard disk drive A storage device that uses a set of rotating, magnetically coated disks called platters to store data or programs. A typical hard disk platter rotates at up to 7200rpm, and the read/write heads float on a cushion of air from 10 to 25 millionths of an inch thick so that the heads never come into contact with the recording surface. The whole unit is hermetically sealed to prevent airborne contaminants from entering and interfering with these close tolerances. Hard disks range in capacity from a few tens of megabytes to several gigabytes of storage space; the bigger the disk, the more important a well thought out backup strategy becomes.

hard disk interfaces A connector on a motherboard that makes it possible to connect a hard disk to the motherboard.

hard disk system A disk storage system containing the following components: the hard disk controller, hard disk, and host adapter.

hard memory error A reproducible memory error that is related to hardware failure.

hard reset A system reset made by pressing the computer's reset button or by turning the power off and then on again.

hard shutdown An unplanned shutdown that involves a complete loss of power to the operating system and all open programs.

hardware All the physical electronic components of a computer system, including peripherals, printed-circuit boards, displays, and printers.

hardware compatibility list (HCL) An HCL is a list (that is maintained and regularly updated by Microsoft for each of its Windows OSs) of all hardware currently known to be compatible with a particular operating system. Windows 98, NT, and 2000 all have their own HCL.

hardware failure A computer failure that involves a hardware component that will not function as expected. Hardware failures often require that the device be replaced rather than repaired.

hardware interrupt An interrupt or request for service generated by a hardware device such as a keystroke from the keyboard or a tick from the clock. Because the processor may receive several such signals simultaneously, hardware interrupts are usually assigned a priority level and processed according to that priority.

hardware ports See I/O address.

head The electromagnetic device used to read from and write to magnetic media such as hard and floppy disks, tape drives, and compact discs. The head converts the information read into electrical pulses sent to the computer for processing.

header Information that is attached to the beginning of a network data frame.

heartbeat A signal generated periodically by hardware or software to indicate that it is still running.

heat sink A device that is attached to an electronic component that removes heat from the component by conduction. It is often a plate of aluminum or metal with several vertical fingers.
hertz Abbreviated Hz. A unit of frequency measurement; 1 hertz equals one cycle per second.

hexadecimal Abbreviated hex. The base-16 numbering system that uses the digits 0 to 9, followed by the letters A to F (equivalent to the decimal numbers 10 through 15). Hex is a very convenient way to represent the binary numbers computers use internally, because it fits neatly into the 8-bit byte. All of the 16 hex digits 0 to F can be represented in 4 bits, and so two hex digits (one digit for each set of 4 bits) can be stored in a single byte. This means that 1 byte can contain any one of 256 different hex numbers, from 0 through FF. Hex numbers are often labeled with a lowercase h (for example, 1234h) to distinguish them from decimal numbers.

high-density disk A floppy disk with more recording density and storage capacity than a double-density disk.

high-level format The process of preparing a floppy disk or a hard disk partition for use by the operating system. In the case of DOS, a high-level format creates the boot sector, the file allocation table (FAT), and the root directory.

high memory area (HMA) In an IBM-compatible computer, the first 64K of extended memory above the 1MB limit of 8086 and 8088 addresses. Programs that conform to the extended memory specification can use this memory as an extension of conventional memory although only one program can use or control HMA at a time.

High Voltage Differential (HVD) A SCSI signalling method that supports a throughput of 40MBps at a cable length of 25 meters.

high-voltage probe A device used to drain away voltage from a monitor before testing. It is a pencil shaped device with a metal point and a wire lead with a clip.

HIMEM.SYS The DOS and Microsoft Windows device driver that manages the use of extended memory and the high memory area on IBM-compatible computers. HIMEM.SYS not only allows your application programs to access extended memory, it oversees that area to prevent other programs from trying to use the same space at the same time. HIMEM.SYS must be loaded by a DEVICE command in your CONFIG.SYS file; you cannot use DEVICEHIGH.

HMA See high memory area (HMA).

home page On the Internet, an initial starting page. A home page may be related to a single person, a specific subject, or a corporation and is a convenient jumping-off point to other pages or resources.

host The central or controlling computer in a networked or distributed processing environment, providing services that other computers or terminals can access via the network. Computers connected to the Internet are also described as hosts, and can be accessed using FTP, Telnet, Gopher, or a browser.

host adapter Translates signals from the hard drive and controller to signals the computer's bus can understand.
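The relationships in the hexadecimal entry (each hex digit encodes 4 bits, so two hex digits fit exactly in one byte) can be verified with a short sketch:

```python
# one hex digit covers 4 bits: binary 1010 is hex A
assert format(0b1010, "X") == "A"

# every byte value 0..255 fits in exactly two hex digits
assert all(len(format(b, "02X")) == 2 for b in range(256))
assert int("FF", 16) == 255

# the lowercase-h convention: 1234h means hexadecimal 1234
assert int("1234", 16) == 4660
```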
host name The name by which a computer is known on a TCP/IP network. This name must be unique within the domain that the machine is in. In Windows 2000 the computer name is always the same as the machine's host name, while in Windows 9x the two can be different.

hot backup site A hot backup site is an exact copy of the original business located in a different physical space. This includes equipment, software, and data. It is extremely expensive to create and maintain but the easiest to transition to should the need arise.

hot plug A technology that allows installation of a hardware component, such as a hard disk, in a computer without first powering down the computer.

hub A connectivity device used to link several computers together into a physical star topology. Hubs repeat any signal that comes in on one port and copy it to the other ports.

HVPS See high-voltage power supply (HVPS).

hybrid topology A mix of more than one topology type used on a network.

Hypertext Transfer Protocol (HTTP) HTTP is the protocol of the World Wide Web, and is used to send and receive web pages and other content from an HTTP server (web server). HTTP makes use of linked pages, accessed via hyperlinks, which are words or pictures that, when clicked on, take you to another page.

I/O (input/output channels) The transfer of data between a computer and a peripheral device. See I/O address.

I/O address Lines on a bus used to allow the CPU to send instructions to the devices installed in the bus slots. Each device is given its own communication line to the CPU. These lines function like one-way (unidirectional) mailboxes.

I/O ports See I/O address.

IP spoofing A hacking method that involves a hacker tricking a private network into believing that his machine's IP address belongs within the private network.

iFolder Novell iFolder is an Internet service software that simplifies, accelerates, and secures access to your data across the Internet. Files are kept dynamically updated and accessible wherever the user is (as long as Internet access is available).

IMAP (Internet Message Access Protocol) A protocol for retrieving e-mail messages. The newest-release IMAP 4 is similar to POP3 but contains advanced features.

incremental backup A backup method that backs up the files that have changed since the last backup (full, differential, or incremental) was performed.

Industry Standard Architecture (ISA) bus An expansion bus used in the original AT motherboards. ISA expansion busses transfer at 16 bits and are configured through the use of jumpers on the expansion card.

in-place upgrade When reformatting a hard disk on a server and performing a clean install of the OS, you will find that most operating systems provide in-place upgrade functionality. When you perform the upgrade, the configuration settings for the old OS are migrated to the new OS. In many ways, an in-place upgrade is the easiest option when you need to upgrade your OS.

integrated Describes a motherboard or other computer component that contains embedded parts normally provided as separate components.

IntranetWare Novell released IntranetWare in 1996 and 1997. It is a product family designed for intranet data sharing.

IBM PS/2 A series of personal computers using several different Intel processors, introduced by IBM in 1987. The main difference between the PS/2 line and earlier IBM personal computers was a major change to the internal bus. Previous computers used the AT bus, also known as industry-standard architecture, but IBM used the proprietary micro channel architecture in the PS/2 line instead. Micro channel architecture expansion boards will not work in a computer using ISA. See IBM-compatible computer.
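The selection rule in the incremental backup entry (copy only files changed since the last backup) can be sketched as below. The function name and the use of raw file modification times are illustrative assumptions only; real backup software typically relies on archive bits or backup catalogs rather than timestamps:

```python
import os

def files_changed_since(root, last_backup_time):
    """List files under root modified after the previous backup's timestamp."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```

By contrast, a full backup (defined earlier) copies everything regardless of change time, and a differential backup compares against the last full backup rather than the last backup of any kind.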
daisy-wheel, and line printers are all impact printers, whereas laser printers are not.

incremental backup A backup of a hard disk that consists of only those files created or modified since the last backup was performed.

Industry Standard Architecture See ISA.

ini file Text file that is created by an installation program when a new Windows application is installed. INI files contain settings for individual Windows applications as well as for Windows itself.

initialization commands A set of commands sent to a modem to prepare it to function.

inoculating The process of protecting a computer system against virus attacks by installing antivirus software.

input/output address See I/O address.

integrated circuit (IC) Also known as a chip. A small semiconductor circuit that contains many electronic components.

integrated drive electronics See IDE.

Integrated Services Digital Network See ISDN.

integrated system boards A system board that has most of the computer's circuitry attached, as opposed to having been installed as expansion cards.

Intel OverDrive OverDrive chips boost system performance by using the same clock multiplying technology found in the Intel 80486DX-2 and DX4 chips. Once installed, an OverDrive processor can increase application performance by an estimated 40 to 70 percent.

intelligent hub A class of hub that can be remotely managed on the network.

interface Any port or opening that is specifically designed to facilitate communication between two entities.

interface software The software for a particular interface that translates software commands into commands that the printer can understand.

interlacing A display technique that uses two passes over the monitor screen, painting every other line on the screen the first time and then filling in the rest of the lines on the second pass. It relies on the physiological phenomenon known as persistence of vision to produce the effect of a continuous image.

interleaving Interleaving involves skipping sectors to write the data, instead of writing sequentially to every sector. This evens out the data flow and allows the drive to keep pace with the rest of the system. Interleaving is given in ratios. If the interleave is 2:1, the disk skips 2 minus 1, or 1 sector, between each sector it writes (it writes to one sector, skips one sector, then writes to the next sector following). Most drives today use a 1:1 interleave, because today's drives are very efficient at transferring information.

International Organization for Standardization (ISO) An international standards-making body, based in Geneva, that establishes global standards for communications and information
exchange. (Note that ISO is not an acronym, but rather a Greek word meaning equal.)

Internet The Internet (Net) is the global TCP/IP network that now extends into nearly every office and school. The World Wide Web is the most visible part of the Internet, but e-mail, newsgroups, and FTP (to name just a few) are also important parts of the Internet.

Internet address An IP or domain address which identifies a specific node on the Internet.

Internet Protocol See IP.

Internet Service Provider (ISP) An ISP is a company that provides Internet access for users. Generally ISPs are local or regional companies that provide Internet access and e-mail addresses to users.

internetwork Any TCP/IP network that spans router interfaces is considered to be an internetwork. This means that anything from a small office with two subnets to the Internet itself can be described as an internetwork.

interrupt A signal to the processor generated by a device under its control (such as the system clock) that interrupts normal processing. An interrupt indicates that an event requiring the processor's attention has occurred, causing the processor to suspend and save its current activity and then branch to an interrupt service routine. This service routine processes the interrupt (whether it was generated by the system clock, a keystroke, or a mouse click) and when it's complete, returns control to the suspended process. In the PC, interrupts are often divided into three classes: internal hardware, external hardware, and software interrupts. The Intel 80x86 family of processors supports 256 prioritized interrupts, of which the first 64 are reserved for use by the system hardware or by DOS.

interrupt request (IRQ) A hardware interrupt signals that an event has taken place that requires the processor's attention, and may come from the keyboard, the input/output ports, or the system's disk drives. In the PC, the main processor does not accept interrupts from hardware devices directly; instead interrupts are routed to an Intel 8259A Programmable Interrupt Controller. This chip responds to each hardware interrupt, assigns a priority, and forwards it to the main processor.

interrupt request (IRQ) lines Hardware lines that carry a signal from a device to the processor.

IP Abbreviation for Internet Protocol. The underlying communications protocol on which the Internet is based. IP allows a data packet to travel across many networks before reaching its final destination.

IP address In order to communicate on a TCP/IP network, each machine must have a unique IP address. This address is in the form x.x.x.x where x is a number from 0 to 255.

IPCONFIG Used on Windows 2000 to view current IP configuration information and to manually request updated information from a DHCP server.

IPP (Internet Printing Protocol) An Internet printing protocol designed by Novell and Xerox, and supported by IETF. IPP allows for printing over the Internet with four main function
areas: finding a printer’s capabilities, kilobits per second Abbreviated Kbps. The
allowing users to submit print jobs to a number of bits, or binary digits, transmitted
printer, allowing users to find printer status, every second, measured in multiples of 1024
and cancelling a previously submitted job. bits per second. Used as an indicator of com-
munications transmission rate.
IRQ See interrupt request (IRQ).
kilobyte Abbreviated K, KB, or Kbyte. 1024
ISA (Industry Standard Architecture) bus bytes.
The 16-bit bus design was first used in IBM’s
PC/AT computer in 1984. ISA has a bus speed of knowledge base A collection of regularly
8MHz and a maximum throughput of 8MBps. updated information pertaining to a specific
EISA is a 32-bit extension to this standard bus. topic.
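The dotted-decimal form described in the IP address entry above can be checked with a short script. This is only an illustrative sketch; the helper name is ours, not from the text:

```python
def is_valid_ipv4(address: str) -> bool:
    """Check the x.x.x.x form described above: four fields, each 0-255."""
    fields = address.split(".")
    if len(fields) != 4:
        return False
    return all(f.isdigit() and int(f) <= 255 for f in fields)

print(is_valid_ipv4("192.168.1.1"))  # a well-formed address: True
print(is_valid_ipv4("256.1.1.1"))    # 256 is out of range: False
```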
latency The time that elapses between issuing a request for data and actually starting the data transfer. In a hard disk, this translates into the time it takes to position the disk's read/write head and rotate the disk so that the required sector or cluster is under the head. Latency is just one of many factors that influence disk access speeds.

LCD See liquid crystal display (LCD).

LCD monitor A monitor that uses liquid crystal display technology. Many laptop and notebook computers use LCD displays because of their low power requirements.

least significant bit (LSB) In a binary number, the lowest-order bit. That is, the rightmost bit. So, in the binary number 0001, the 1 is the least significant bit.

LED page printer A type of EP process printer that uses a row of LEDs instead of a laser to expose the EP drum.

LED panels Small light panels located either on the front panel, or within a computer case, designed to provide visual information as to the status of the computer's hardware operations.

legacy An application in which a company has already invested heavily, and which must remain operational. Legacy applications can limit upgrades.

letter quality (LQ) A category of dot-matrix printer that can print characters that look very close to the quality a laser printer might produce.

liquid crystal display (LCD) A display technology common in portable computers that uses electric current to align crystals in a special liquid. The rod-shaped crystals are contained between two parallel transparent electrodes, and when current is applied, they change their orientation, creating a darker area. Many LCD screens are also backlit or side-lit to increase visibility and reduce the possibility of eyestrain.

line conditioner A device placed between a computer and the electrical source. Line conditioners protect electronic equipment from power surges, spikes, and brownouts.

Linux A freely available operating system based on Unix. Linux's open architecture lends it especially well to custom engineering to meet individual needs.

Load Balancing A strategy in which requests are distributed across all available channels. The idea behind Load Balancing is to equalize the traffic stress across multiple devices rather than place a major burden on one. Similar to clustering, in that two or more servers team up to do a single job; what distinguishes Load Balancing, though, is that each server retains its own identity and often keeps its own copy of needed resources.

local area network (LAN) A group of computers and associated peripherals connected by a communications channel capable of sharing files and other resources between several users.

local bus A PC bus specification that allows peripherals to exchange data at a rate faster than the 8 megabytes per second allowed by the ISA (Industry Standard Architecture) and the 32 megabytes per second allowed by the EISA (Extended Industry Standard Architecture)
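The least significant bit entry above lends itself to a one-line check: ANDing a number with 1 isolates its rightmost bit. A minimal illustration, not from the text:

```python
# The LSB is the rightmost binary digit; n & 1 extracts it.
for n in [0b0001, 0b0010, 0b0111]:
    print(f"{n:04b} -> LSB = {n & 1}")
```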
definitions. Local bus can achieve a maximum data rate of 133 megabytes per second with a 33MHz bus speed, 148 megabytes per second with a 40MHz bus, or 267 megabytes per second with a 50MHz bus.

local resources Files or folders that are physically located on the machine the user is sitting at are referred to as local to that user. Windows 2000 has the ability to enforce local security, while Windows 9x does not.

log file A file that details errors and warnings generated, when they occurred, an event ID, and sometimes an event description. Normally log files are saved in a text format on a local hard drive.

logic board The sturdy sheet or board to which all other components on the computer are attached. These components consist of the CPU, underlying circuitry, expansion slots, video components, and RAM slots, just to name a few. Also known as a motherboard or planar board.

logical drive Created within an extended partition, a logical drive is used to organize space within the partition, which can be accessed through the use of a drive letter.

logical memory The way memory is organized so it can be accessed by an operating system.

logical topology Topology that defines how the data flows in a network.

logon The process of logging on submits your username and password to the network and gives you the network credentials you will use for the rest of that session. Users can either log on to a workgroup or to a network security entity (such as the Active Directory).

low-level format The process that creates the tracks and sectors on a blank hard disk or floppy disk; sometimes called the physical format. Most hard disks are already low-level formatted; however, floppy disks receive both a low- and a high-level format (or logical format) when you use the DOS or OS/2 command FORMAT.

Low Voltage Differential (LVD) A signaling method used in SCSI communication. A low-noise, low-power, low-amplitude method for high-speed data transmission.

LPTx ports In DOS, the device name used to denote a parallel communications port, often used with a printer. DOS supports three parallel ports: LPT1, LPT2, and LPT3, and OS/2 adds support for network ports LPT4 through LPT9.

LUN (logical unit number) A unique identifier on a SCSI bus that enables it to differentiate between up to eight separate devices on a single SCSI ID.

magneto-optical (MO) drives An erasable, high-capacity, removable storage device similar to a CD-ROM drive. Magneto-optical drives use both magnetic and laser technology to write data to the disk and use the laser to read that data back again. Writing data takes two passes over the disk, an erase pass followed by the write pass, but reading can be done in just one pass and, as a result, is much faster.
main motor A printer stepper motor that is used to advance the paper.

MAN (metropolitan area network) A network that is larger than a local area network but smaller than a wide area network. MANs often contain multiple redundant links between physical locations to ensure connectivity.

megabit (Mbit) Usually 1,048,576 binary digits or bits of data. Often used as equivalent to 1 million bits.

megabits per second (Mbps) A measurement of the amount of information moving across a network or communications link in 1 second, measured in multiples of 1,048,576 bits.

memory optimization The process of making the most possible conventional memory available to run DOS programs.

memory refresh An electrical signal that keeps the data stored in memory from degrading.

mesh topology Type of logical topology where each device on a network is connected to every other device on the network. This topology uses routers to search multiple paths and determine the best path.

Messaging Application Programming Interface (MAPI) The MAPI interface is used to control how Windows interacts with messaging applications such as e-mail programs. MAPI makes most of the functions of e-mail transparent and allows programmers to just write the application, not the whole messaging system.

Microsoft Disk Operating System See MS-DOS.

modem Contraction of modulator/demodulator, a device that allows a computer to transmit information over a telephone line. The modem translates between the digital signals that the computer uses and analog signals suitable for transmission over telephone lines. When transmitting, the modem modulates the digital data onto a carrier signal on the telephone line. When receiving, the modem performs the reverse process and demodulates the data from the carrier signal.

modified frequency modulation (MFM) encoding The most widely used method of storing data on a hard disk. Based on an earlier technique known as frequency modulation (FM) encoding, MFM achieves a two-fold increase in data storage density over standard FM recording, but it is not as efficient a space saver as run-length limited encoding.

Molex connector See standard peripheral power connector.

monitor A video output device capable of displaying text and graphics, often in color.

Monitor A Novell NetWare loadable module for monitoring the status and performance of the NetWare server and network activity. Monitor also observes memory and processor use.

monitoring agents Software programs that assist the process of performance monitoring by collecting and reporting data.

monochrome monitor A monitor that can display text and graphics in one color only. For example, white text on a green background or black text on a white background.

monthly rotation A backup rotation cycle in which, for example, each Friday's tape is kept for a month. Data errors discovered within the
week could be corrected by restoring the appropriate daily backup. Errors discovered within the month could be corrected by restoring from a weekly backup, limiting data loss to changes made subsequent to that backup.

most significant bit (MSB) In a binary number, the highest-order bit. That is, the leftmost bit. In the binary number 10000000, the 1 is the most significant bit.

motherboard The main printed circuit board in a computer that contains the central processing unit, appropriate coprocessor and support chips, device controllers, memory, and also expansion slots to give access to the computer's internal bus. Also known as a logic board or system board.

mouse A small input device with one or more buttons used for pointing or drawing. As you move the mouse in any direction, an on-screen mouse cursor follows the mouse movements; all movements are relative. Once the mouse pointer is in the correct position on the screen, you can press one of the mouse buttons to initiate an action or operation; different user interfaces and file programs interpret mouse clicks in different ways.

MSBACKUP A DOS program that allows the user to make backup copies of all the programs and data stored on the hard disk. This program is menu-driven and allows the user to set up options that can be used each time you back up the hard drive.

MSD (Microsoft System Diagnostics) Program that allows the user to examine many different aspects of a system's hardware and software setup.

MS-DOS Acronym for Microsoft Disk Operating System. MS-DOS, like other operating systems, allocates system resources (such as hard and floppy disks, the monitor, and the printer) to the applications programs that need them. MS-DOS is a single-user, single-tasking operating system, with either a command-line interface or a shell interface.

MTBF (mean time between failures) A calculation of the average time between computer failures. This can include hardware, software, or a combination of the two.

multimedia A computer technology that displays information by using a combination of full-motion video, animation, sound, graphics, and text with a high degree of user interaction.

multimeter Electronic device used to measure and test ohms, amperes, and volts.

multimode A fiber optic transmission method that uses light emitting diodes as the optical transmission method.

multiplexer A network device that combines multiple data streams into a single stream for transmission. Multiplexers can also break out the original data streams from a single, multiplexed stream.

multipurpose server A server that has more than one use. For example, a multipurpose server can be both a file server and a print server.

multistation access unit (MAU) The central device in a Token Ring network that provides both the physical and logical connections to the stations.
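The most significant bit entry above can be read with a shift: the highest-order bit of an n-bit value is the value shifted right by n-1 places. An illustrative sketch only; the function name is ours:

```python
def msb(value: int, width: int = 8) -> int:
    """Return the highest-order bit of value within a fixed bit width."""
    return (value >> (width - 1)) & 1

print(msb(0b10000000))  # the leftmost bit of 10000000 is 1
print(msb(0b00000001))  # here the leftmost bit is 0
```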
into a personal computer or server and works with the network operating system to control the flow of information over the network. The network interface card is connected to the network cabling (twisted-pair, coaxial or fiber-optic cable), which in turn connects all the network interface cards in the network.

Network layer The third of seven layers of the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model for computer-to-computer communications. The Network layer defines protocols for data routing to ensure that the information arrives at the correct destination node.

network security provider In a network environment, it is often easier to manage the network by having centralized user ID and password storage. Examples of this type of centralized system are Windows 2000's Active Directory or NetWare's NDS.

Newsgroup An online discussion group which shares information on a specific topic of interest. Messages can be posted and replied to on a newsgroup.

NIC See network interface card (NIC).

NLMs (NetWare Loadable Modules) Software that enhances or provides additional functions in a NetWare 3.x or higher server.

node In communications, any device attached to the network.

nonconductor Any material that does not conduct electricity.

nondedicated server A computer that can be both a server and a workstation. In practice, by performing the functions of both server and workstation, this type of server does neither function very well. Nondedicated servers are typically used in peer-to-peer networks.

nonintegrated system boards A type of motherboard where the various subsystems (video, disk access, etc.) are not integrated into the motherboard, but rather placed on expansion cards that can be removed and upgraded.

non-interlaced Describes a monitor in which the display is updated (refreshed) in a single pass, painting every line on the screen. Interlacing takes two passes to paint the screen, painting every other line on the first pass, and then sequentially filling in the other lines on the second pass. Non-interlaced scanning, while more expensive to implement, reduces unwanted flicker and eyestrain.

non-natural disasters Disastrous data loss originating from human sources such as electrical fires, theft, and vandalism.

NOS (Network Operating System) Software that runs on the server and controls and manages the network. The NOS controls the communication with resources and the flow of data across the network.

notebook computer A small portable computer, about the size of a computer book, with a flat screen and a keyboard that fold together. A notebook computer is lighter and smaller than a laptop computer. Some models use flash memory rather than conventional hard disks for program and data storage, while other models offer a range of business applications in ROM. Many offer PCMCIA expansion slots
for additional peripherals such as modems, fax modems, or network connections.

Novell NetWare A server operating system created by Novell. The most recent version of NetWare is version 6.

NTDS (NT Directory Services) Previous versions of Windows Server used NTDS (NT Directory Services) to control user account and group security. Windows 2000 switched to Active Directory.

NTFS The NT File System was created to provide enhanced security and performance for the Windows NT operating system, and it has been adopted and improved upon by Windows 2000. NTFS provides Windows 2000 with local file security, file auditing, compression, and encryption options. It is not compatible with Windows 9x or DOS.

null modem A short RS-232-C cable that connects two personal computers so that they can communicate without the use of modems. The cable connects the two computers' serial ports, and certain lines in the cable are crossed over so that the wires used for sending data by one computer are used for receiving data by the other computer and vice versa.

numeric keypad A set of keys to the right of the main part of the keyboard, used for numeric data entry.

odd parity A technique that counts the number of 1s in a binary number and, if the total number of 1s is not odd, adds a digit to make it odd. See also parity.

ohm Unit of electrical resistance.

Open Systems Interconnection (OSI) model See OSI (Open Systems Interconnection) model.

OpenServer OpenServer is a family of client and server operating systems for the Intel platform based on the Unix operating system.

operating system (OS) The software responsible for allocating system resources, including memory, processor time, disk space, and peripheral devices such as printers, modems, and the monitor. All application programs use the operating system to gain access to these system resources as they are needed. The operating system is the first program loaded into the computer as it boots, and it remains in memory at all times thereafter.

optical disk A disk that can be read from and written to, like a fixed disk but, like a CD, is read with a laser.

optical drive A type of storage drive that uses a laser to read from and write to the storage medium.

optical mouse A mouse that uses a special mouse pad and a beam of laser light. The beam of light shines onto the mouse pad and reflects back to a sensor in the mouse. Special small lines crossing the mouse pad reflect the light into the sensor in different ways to signal the position of the mouse.

optical scanner See scanner.

optical touch screen A type of touch screen that uses light beams on the top and left side and optical sensors on the bottom and right side to detect the position of your finger when you touch the screen.
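The odd parity entry above describes a concrete rule: count the 1s, and add a bit that makes the total odd. A minimal sketch of that rule (the function name is ours, not from the text):

```python
def odd_parity_bit(bits: str) -> int:
    """Return the parity bit that makes the total count of 1s odd."""
    ones = bits.count("1")
    return 0 if ones % 2 == 1 else 1

print(odd_parity_bit("1011"))  # three 1s, already odd -> parity bit 0
print(odd_parity_bit("1001"))  # two 1s, even -> parity bit 1
```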
option disk A disk that contains the device-specific configuration files for the device being installed into an MCA bus computer.

optomechanical mouse Type of mouse that contains a round ball that makes contact with two rollers. Each roller is connected to a wheel that has small holes in it. The wheel rotates between the arms of a U-shaped mechanism that holds a light on one arm and an optical sensor on the other. As the wheels rotate, the light flashes coming through the holes indicate the speed and direction of the mouse, and these values are transmitted to the computer and the mouse control software.

OS/2 An early 32-bit operating system originally designed by Microsoft and IBM in partnership, and then sold by IBM exclusively.

OSI (Open Systems Interconnection) Model A protocol model, developed by the International Organization for Standardization (ISO), that was intended to provide a common way of describing network protocols. This model describes a seven-layered relationship between the stages of communication. Not every protocol maps perfectly to the OSI model, as there is some overlap within some of the layers of some protocols.

Overclocking A method of physically forcing a computer to perform at a faster speed than originally designed. Overclocking often involves increasing voltages and clock speeds for the processor.

packet filtering A method of controlling traffic to a network through analyzing incoming and outgoing data packets. IP addresses are used to assess data packets and then compared to a static list to grant or deny access. Packet filtering is a method used by firewalls.

page description language Describes the whole page being printed. The controller in the printer interprets these commands and turns them into laser pulses or firing print wires.

page frame The special area reserved in upper memory that is used to swap pages of memory into and out of expanded memory.

page printers Type of printer that handles print jobs one page at a time instead of one line at a time.

pages 16K chunks of memory used in expanded memory.

paging The process of swapping memory to an alternate location, such as to and from a page frame in expanded memory or to and from a swap file.

PAN (personal area network) A small network normally focused on an individual and devices associated with him or her. Devices can include laptops, desktops, and PDAs.

paper pickup roller A D-shaped roller that rotates against the paper and pushes one sheet into the printer.

paper registration roller A roller in an EP process printer that keeps paper movement in sync with the EP image formation process.

paper transport assembly The set of devices that moves the paper through the
printer. It consists of a motor and several rubberized rollers that each perform a different function.

parallel port An input/output port that manages information 8 bits at a time, often used to connect a parallel printer.

parallel processing A processor architecture where a processor essentially contains two processors in one. The processor can then execute more than one instruction per clock cycle.

parity Parity is a simple form of error checking used in computers and telecommunications. Parity works by adding an additional bit to a binary number and using it to indicate any changes in that number during transmission.

parity RAM An error-assessing method used to analyze RAM based on adding an extra data bit. Parity is used to ensure the validity of the data.

partition A portion of a hard disk that the operating system treats as a separate drive.

partition table In DOS, an area of the hard disk containing information on how the disk is organized. The partition table also contains information that tells the computer which operating system to load; most disks will contain DOS, but some users may divide their hard disk into different partitions, or areas, each containing a different operating system. The partition table indicates which of these partitions is the active partition, the partition that should be used to start the computer.

passive hub Type of hub that electrically connects all network ports together. This type of hub is not powered.

passive-matrix screen An LCD display mechanism that uses a transistor to control every row of pixels on the screen. This is in sharp contrast to active-matrix screens, where each individual pixel is controlled by its own transistor.

password In order to identify themselves on the network, each user must provide two credentials—a username and a password. The username says, "This is who I am," and the password says, "And here's proof!" Passwords are case sensitive and should be kept secret from other users on the network.

Patch A small file created to repair or add a feature to an existing program. Patches are installed and normally replace existing files within a program or operating system.

path When referring to a file on a computer's hard drive, the path is used to describe where it exists within the directory structure. If a file is on the D drive in a folder named TEST, its path is d:\test\.

PC Card A PC Card, also known as a PCMCIA card or a "credit card adapter," is a peripheral device that uses the PCMCIA specification. These have the advantage of being small, easy to use and fully plug-and-play compliant.

PC Card slot An opening in the case of a portable computer intended to receive a PC Card; also known as a PCMCIA slot.

PC Card Socket Services See socket services.

PCB See printed-circuit board (PCB).
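The path entry above can be illustrated with Python's standard pathlib, which splits a Windows-style path into the directory components described there. The file name used here is hypothetical, added only for the example:

```python
from pathlib import PureWindowsPath

# A hypothetical file on the D: drive in a folder named TEST.
p = PureWindowsPath(r"d:\test\report.txt")
print(p.drive)   # the drive portion, "d:"
print(p.parent)  # the folder that holds the file
print(p.name)    # the file itself, "report.txt"
```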
PCONSOLE Novell NetWare version 3 used PCONSOLE to set up and manage printers on a NetWare server.

PC-DOS Microsoft's Disk Operating System is generally referred to as MS-DOS. When it was packaged with IBM's personal computers, though, DOS was modified slightly and was called PC-DOS.

PCI Abbreviation for Peripheral Component Interconnect. A specification introduced by Intel that defines a local bus that allows up to 10 PCI-compliant expansion cards to be plugged into the computer. One of these 10 cards must be the PCI controller card, but the others can include a video card, network interface card, SCSI interface, or any other basic input/output function. The PCI controller exchanges information with the computer's processor as 32- or 64-bits and allows intelligent PCI adapters to perform certain tasks concurrently with the main processor by using bus mastering techniques.

PCMCIA Abbreviation for PC Memory Card International Association. Expansion cards developed for this standard are now called PC Cards.

peer-to-peer network Network where the computers act as both workstations and servers and where there is no centralized administration or control.

Pentium The Pentium represents the evolution of the 80486 family of microprocessors and adds several notable features, including 8K instruction code and data caches, built-in floating-point processor and memory management unit, as well as a superscalar design and dual pipelining that allow the Pentium to execute more than one instruction per clock cycle.

Pentium Pro The 32-bit Pentium Pro (also known as the P6) has a 64-bit data path between the processor and cache and is capable of running at clock speeds up to 200MHz. Unlike the Pentium, the Pentium Pro has its secondary cache built into the CPU itself, rather than on the motherboard, meaning that it accesses cache at internal speed, not bus speed.

peripheral Any hardware device attached to and controlled by a computer, such as a monitor, keyboard, hard disk, floppy disk, CD-ROM drive, printer, mouse, tape drive, and joystick.

Peripheral Component Interconnect See PCI.

permanent swap file A permanent swap file allows Microsoft Windows to write information to a known place on the hard disk, which enhances performance over using conventional methods with a temporary swap file. The Windows permanent swap file consists of a large number of consecutive contiguous clusters; it is often the largest single file on the hard disk, and of course this disk space cannot be used by any other application.

PFA (Predictive Failure Analysis) The use of software and hardware tools to create an objective opinion and prediction as to the life cycle of a computer component.

PGA (Pin Grid Array) A type of IC package that consists of a grid of pins connected to a square, flat package.
photosensitive drum See EP drum.

physical disk A disk that exists within or attached to a computer; hardware.

Physical layer The first and lowest of the seven layers in the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model for computer-to-computer communications. The Physical layer defines the physical, electrical, mechanical, and functional procedures used to connect the equipment.

physical topology How the cables on a network are physically arranged. Possible configurations include star, ring, mesh, and hybrid topologies.

pickup roller See paper pickup roller.

Pin Grid Array See PGA (Pin Grid Array).

PING Packet Internet Groper utility, a command-line utility used to see if another host on the network is reachable and responsive. It works by sending out packets to another host on the network and waiting for a reply.

PING of death A DoS attack in which a large ICMP packet is sent to overflow a remote host's buffer, which causes the remote host to reboot or hang.

plan maintenance The process of revisiting a created plan to add updates, repair known problem areas, or adjust the listed procedures as seen fit.

planar board See motherboard.

platform An operating system (OS) is the basic software that runs on a computer, and it is the base on which all other software sits. As such, the OS is the "platform" that applications and utilities run on.

Plug and Play (PnP) A standard that defines automatic techniques designed to make PC configuration simple and straightforward.

Point-to-Point Protocol (PPP) The protocol used with dial-up connections to the Internet. Its functions include error control, security, dynamic IP addressing, and support for multiple protocols.

POP See Post Office Protocol.

POST See power on self-test (POST).

Post Office Protocol v 3 (POP3) POP3 is used to accept and store e-mail and to allow users to connect to their mailbox and access their mail. SMTP is used to send mail to the POP3 server.

PostScript A page-description language used when printing high-quality text and graphics. Desktop publishing or illustration programs that create PostScript output can print on any PostScript printer or imagesetter, because PostScript is hardware-independent. An interpreter in the printer translates the PostScript commands into commands that the printer can understand.

potentiometer See variable resistor.

power on self-test (POST) A set of diagnostic programs, loaded automatically from ROM BIOS during startup, designed to ensure that the major system components are present and operating. If a problem is found, the POST
software writes an error message on the screen, sometimes with a diagnostic code number indicating the type of fault located. These POST tests execute before any attempt is made to load the operating system.

power spike A sudden and brief surge in electrical current.

power supply A part of the computer that converts the power from a wall outlet into the lower voltages, typically 5 to 12 volts DC, required internally in the computer.

power surge A brief but sudden increase in line voltage, often destructive, usually caused by a nearby electrical appliance (such as a photocopier or elevator) or when power is reapplied after an outage.

power users A power user is someone who either does administrative-level tasks on their machine or needs to have additional access to the system to do their work. The Power Users group on a Windows 2000 Professional station has abilities somewhere between normal users and administrators.

preemptive multitasking A form of multitasking where the operating system executes an application for a specific period of time, according to its assigned priority and need. At that time, it is preempted and another task is given access to the CPU for its allocated time. Although an application can give up control before its time is up, such as during input/output waits, no task is ever allowed to execute for longer than its allotted time period.

Presentation layer The sixth of seven layers of the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model for computer-to-computer communications. The Presentation layer defines the way that data is formatted, presented, converted, and encoded.

preventative maintenance The process of performing various procedures on a computer to prevent future data loss or system downtime.

primary DOS partition In DOS, a division of the hard disk that contains important operating system files. A DOS hard disk can be divided into two partitions, or areas: the primary DOS partition and the extended DOS partition. If you want to start your computer from the hard disk, the disk must contain an active primary DOS partition that includes the three DOS system files: MSDOS.SYS, IO.SYS, and COMMAND.COM. The primary DOS partition on the first hard disk in the system is referred to as drive C. Disk partitions are displayed, created, and changed using the FDISK command.

print consumables Products that a printer uses in the print process that must be replaced occasionally. Examples include toner, ink, ribbons, and paper.

printed-circuit board (PCB) Any flat board made of plastic or fiberglass that contains chips and other electronic components. Many PCBs are multilayer boards with several different sets of copper traces connecting components together.

printer control assembly Large circuit board in the printer that converts signals from the computer into signals for the various parts in the laser printer.
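The preemptive multitasking entry above describes fixed time slices handed out in turn, with tasks preempted when their slice expires. A toy round-robin sketch of that idea; the names and numbers are illustrative only, not from the text:

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, remaining_time) pairs. Run each task for at
    most `quantum` units, then preempt it and move it to the back."""
    ready = deque(tasks)
    order = []
    while ready:
        name, remaining = ready.popleft()
        slice_used = min(quantum, remaining)
        order.append((name, slice_used))
        if remaining > slice_used:
            ready.append((name, remaining - slice_used))
    return order

# Task A needs 3 units, task B needs 1; the quantum is 2 units.
print(round_robin([("A", 3), ("B", 1)], quantum=2))
```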
printer ribbon A fabric strip that is impregnated with ink and wrapped around two spools encased in a cartridge. This cartridge is used in dot-matrix printers to provide the ink for the print process.

printhead That part of a printer that creates the printed image. In a dot-matrix printer, the printhead contains the small pins that strike the ribbon to create the image, and in an ink-jet printer, the printhead contains the jets used to create the ink droplets as well as the ink reservoirs. A laser printer creates images using an electrophotographic method similar to that found in photocopiers and does not have a printhead.

print media Another name for the mediums being printed on. Examples include paper, transparencies, and labels.

Program Manager The primary interface to Windows that allows you to organize and execute numerous programs by double-clicking an icon in a single graphical window.

programmable read-only memory (PROM) A memory chip to which data can be written only once. Data is written to a PROM chip with a device called a PROM programmer or PROM burner.

proprietary design A motherboard design that is unique to a particular manufacturer and is not licensed to other manufacturers.

protected mode A processor operating mode where every program's memory is protected from every other program so that if one program crashes, it doesn't bring down the other programs.
proxy servers A server that performs tasks on behalf of the private network. In using a proxy server, you are protecting the private IP addresses of the internal network. Only the address of the proxy server is given out onto the public network.

PS/2 mouse interface A type of mouse interface that uses a round, DIN-6 connector that gets its name from the first computer it was introduced on, the IBM PS/2.

puck The proper name for the mouse-like device used with drawing tablets.

QSOP (Quad Small Outline Package) A type of IC package that has all leads soldered directly to the circuit board. Also called a "surface mount" chip.

Queue A temporary holding structure in which values can be stored until needed. Queues are organized in such a way that the first item received is also the first item addressed. Queues are used in printers.

Quick-and-Dirty Disk Operating System (QDOS) Created by Tim Patterson of Seattle Computer Products, QDOS was the basis of MS-DOS. QDOS was purchased by Microsoft and renamed MS-DOS.

Rack A storage structure to which components can be bolted for security. Typical racks are 19" wide.

radio frequency interference (RFI) Many electronic devices, including computers and peripherals, can interfere with other signals in the radio-frequency range by producing electromagnetic radiation; this is normally regulated by government agencies in each country.

RAID Redundant Array of Inexpensive (or Independent) Disks. RAID refers to a system setup that uses multiple drives and writes data across all disks in a predefined order. RAID is based on levels and data being mirrored or striped.

RAM Acronym for random access memory. The main system memory in a computer, used for the operating system, application programs, and data.

RAM disk An area of memory managed by a special device driver and used as a simulated disk. Anything stored on a RAM disk will be erased when the computer is turned off, so the contents must be saved onto a real disk.

Rambus Inline Memory Modules (RIMMs) A type of memory module that uses Rambus memory. See Direct Rambus.

random access memory See RAM.

rasterizing The process of converting signals from the computer into signals for the various assemblies in the laser printer.

readme file A basic text document created by a product manufacturer, which contains information on a specific product. Readme files often contain last-minute changes and important information regarding the operation of the product.

read-only memory See ROM (read-only memory).

read/write head That part of a floppy- or hard-disk system that reads and writes data to and from a magnetic disk.
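The "writes data across all disks in a predefined order" behavior in the RAID entry can be sketched as round-robin striping, RAID 0 style. This Python sketch is an illustration only; the disk count and block size are arbitrary.

```python
def stripe(data, disks, block=4):
    """Split `data` into fixed-size blocks and deal them out to the
    disks in a fixed, repeating order (RAID 0-style striping)."""
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    layout = [[] for _ in range(disks)]
    for i, chunk in enumerate(blocks):
        layout[i % disks].append(chunk)  # predefined order: round robin
    return layout

# 16 bytes striped across 2 disks in 4-byte blocks:
print(stripe(b"ABCDEFGHIJKLMNOP", 2))
```

Mirroring, by contrast, would write every block to every disk; striping alone improves speed but provides no redundancy.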
real mode A processor operating mode whereby a processor emulates an 8086 processor.

have an individual password and user account for authentication.

rheostat See variable resistor.

ribbon cartridge The container that holds the printer ribbon.

ring topology Type of physical topology in which each computer connects to two other computers, joining them in a circle and creating a unidirectional path where messages move from workstation to workstation. Each entity participating in the ring reads a message, regenerates it, and then hands it to its neighbor.

RJ-11/RJ-45 Commonly used modular telephone connectors. RJ-11 is a four- or six-pin connector used in most connections destined for voice use; it is the connector used on phone cords. RJ-45 is the eight-pin connector used for data transmission over twisted-pair wiring and can be used for networking; RJ-45 is the connector used on 10Base-T Ethernet cables.

RLL encoding See run-length limited (RLL) encoding.

ROM (read-only memory) A type of computer memory that retains its data permanently, even when power is removed. Once the data is written to this type of memory, it cannot be changed.

root directory In a hierarchical directory structure, the directory from which all other directories must branch. The root directory is created by the FORMAT command and can contain files as well as other directories. This directory cannot be deleted.

router In networking, an intelligent connecting device that can send packets to the correct local area network segment to take them to their destination. Routers link local area network segments at the network layer of the International Organization for Standardization's Open Systems Interconnect (ISO/OSI) model for computer-to-computer communications.

Routing Information Protocol (RIP) for IPX A distance-vectoring route-discovery protocol used by IPX. It uses hops and ticks to determine the cost for a particular route. The path with the fewest hops (each router involved in a path is one hop) is used when sending data.

RS-232-C In asynchronous transmissions, a recommended standard interface established by the Electronic Industries Association. The standard defines the specific lines, timing, and signal characteristics used between the computer and the peripheral device and uses a 25-pin or 9-pin DB connector. RS-232-C is used for serial communications between a computer and a peripheral such as a printer, modem, digitizing tablet, or mouse.

RS-232 cables See serial cables.

RS-422/423/449 In asynchronous transmissions, a recommended standard interface established by the Electronic Industries Association for distances greater than 50 feet but less than 1000 feet. The standard defines the specific lines, timing, and signal characteristics used between the computer and the peripheral device.

RTS Abbreviation for request to send. A hardware signal defined by the RS-232-C standard to request permission to transmit.

run-length limited (RLL) encoding An efficient method of storing information on a hard
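The fewest-hops rule in the Routing Information Protocol (RIP) for IPX entry can be sketched in a few lines. The route names and hop counts below are hypothetical, purely to show the selection rule.

```python
def best_route(routes):
    """Pick the route with the fewest hops, as RIP does; `routes` maps
    a next-hop name to its hop count (each router in the path is one hop)."""
    return min(routes, key=routes.get)

# Two hypothetical paths to the same network:
paths = {"via_router_A": 3, "via_router_B": 1}
print(best_route(paths))  # the one-hop path wins
```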
disk that effectively doubles the storage capacity of a disk when compared to older, less efficient methods such as modified frequency modulation encoding (MFM).

SAN (storage area network) Provides a link for multiple users to a mass storage location. Large corporations that prefer to centralize their data commonly use SANs. This assists in data safety and backups because all the file servers can be housed in a secure building with limited access. SANs normally use high-speed links such as fiber optics to communicate with their storage facilities.

Safe Mode A Windows 9x operating mode that loads only a basic set of drivers and a basic screen resolution. It can be activated using the F8 key at boot time.

Sbackup A backup utility provided in Novell NetWare versions 3 and 4.

scanner An optical device used to digitize images such as line art or photographs, so that they can be merged with text by a page-layout or desktop publishing program or incorporated into a CAD drawing.

screen saver Program originally designed to prevent damage to a computer monitor from being left on too long. These programs usually include moving graphics so that no one pixel is left on all the time. Screen savers detect computer inactivity and activate after a certain period.

SCSI Acronym for small computer system interface. A high-speed, system-level parallel interface defined by the ANSI X3T9.2 committee. SCSI is used to connect a personal computer to several peripheral devices using just one port. Devices connected in this way are said to be "daisy-chained" together, and each device must have a unique identifier or priority number.

SCSI adapter Device that is used to manage all the devices on the SCSI bus as well as to send and retrieve data from the devices.

SCSI address A unique address given to each SCSI device.

SCSI bus Another name for the SCSI interface and communications protocol.

SCSI chain All the devices connected to a single SCSI adapter.

SCSI ID A unique number that identifies a SCSI device on a SCSI chain. Every device on a SCSI chain must have a unique SCSI ID. The SCSI ID can be configured through jumpers on the device or through a software utility.

SCSI terminator The SCSI interface must be correctly terminated to prevent signals echoing on the bus. Many SCSI devices have built-in terminators that engage when they are needed. With some older SCSI devices, you have to add an external SCSI terminator that plugs into the device's SCSI connector.

SCSI-1 High-speed parallel interface standard that supports up to eight devices, both internal and external, and provides a generic interface for devices such as scanners, CD-ROMs, and other disks. SCSI-1 had a transfer speed of 5MBps.
SCSI-2 A SCSI interface that supports transfer rates of up to 10MBps. Supports drives of 3GB or more.

sector The smallest unit of storage on a disk, usually 512 bytes. Sectors are grouped together into clusters.

seek time The time it takes the actuator arm to move from rest position to active position for the read/write head to access the information. Often used as a performance gauge of an individual drive. The major part of a hard disk's access time is actually seek time.

semiconductors Any material that, depending on some condition, is either a conductor or nonconductor.

serial cables Cables used for serial communications. See serial communications.

serial communications The transmission of information from computer to computer or from computer to a peripheral, one bit at a time. Serial communications can be synchronous and controlled by a clock or asynchronous and coordinated by start and stop bits embedded in the data stream.

serial mouse A mouse that attaches directly to one of the computer's serial ports.

serial port A computer input/output port that supports serial communications in which information is processed one bit at a time. RS-232-C is a common serial protocol used by computers when communicating with modems, printers, mice, and other peripherals.

serial printer A printer that attaches to one of the computer's serial ports.

server In networking, any computer that makes access to files, printing, communications, or other services available to users of the network. In large networks, a server may run a special network operating system; in smaller installations, a server may run a personal computer operating system.

service pack A Microsoft term for an operating system update distributed to repair known issues as well as to update features. Novell uses the term consolidated support pack for its collection of patches and fixes. Sun Microsystems provides Patch Clusters for its operating system.

server uptime A measure of the time that a server has been successfully operating since its last shutdown.

service Any program that runs in the background on a computer and performs some sort of task for that computer or other machines on the network.

Session layer The fifth of seven layers of the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model for computer-to-computer communications. The Session layer coordinates communications and maintains the session for as long as it is needed, performing security, logging, and administrative functions.

share name The name used to identify a network access point. Share names can be the same as the directory they are sharing or they can be different.

shell Every operating system needs to have some sort of interface that allows users to
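The serial communications entry describes sending data one bit at a time, with asynchronous transfers framed by start and stop bits. A common asynchronous convention (often written 8-N-1: eight data bits, no parity, one stop bit) can be sketched like this; the framing shown is one convention among several.

```python
def frame_byte(value):
    """Frame one data byte for asynchronous serial transmission:
    a 0 start bit, eight data bits (least significant bit first),
    and a 1 stop bit."""
    data_bits = [(value >> i) & 1 for i in range(8)]  # one bit at a time
    return [0] + data_bits + [1]

# The letter 'A' (0x41) as it would appear on the wire, 8-N-1 style:
print(frame_byte(0x41))
```

The receiver watches for the falling edge of the start bit, samples the next eight bits, then expects the line to return to 1 for the stop bit.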
navigate the system. The shell is the program that controls how this interface works. For MS-DOS, the Windows Program Manager was its most popular shell. For Windows 9x and 2000, Explorer (explorer.exe) is the standard shell program.

shielded twisted-pair See STP (shielded twisted-pair).

single-ended (SE) A communication method used by SCSI devices. Most SCSI devices use normal SE signaling, which limits the maximum length of a SCSI bus to 1.5m (4.9'). This includes most 50-pin (narrow) SCSI devices such as scanners and Zip drives.

Single Inline Memory Module (SIMM) Individual RAM chips are soldered or surface mounted onto small, narrow circuit boards called carrier modules, which can be plugged into sockets on the motherboard. These carrier modules are simple to install and occupy less space than conventional memory modules.

Single Inline Package (SIP) A type of semiconductor package where the package has a single row of connector pins on one side only.

single mode A fiber optic transmission method that uses a laser as the light source.

single-purpose server A server that is dedicated to one purpose (e.g., a file server or a printer server).

site license A software license that is valid for all installations at a single site.

Slackware A free version of Linux created by Patrick Volkerding.

socket services Part of the software support needed for PCMCIA hardware devices in a portable computer, controlling the interface to the hardware. Socket services is the lowest layer in the software that manages PCMCIA cards. It provides a BIOS-level software interface to the hardware, effectively hiding the specific details from higher levels of software. Socket services also detect when you insert or remove a PCMCIA card and identify the type of card it is.

socket A motherboard receptacle into which a compatible processor can be inserted. Some motherboard designs use slots instead, and you need to match the chip to the motherboard. (A second meaning of socket is a Unix term for a software object that connects an application to a network protocol.)

software An application program or an operating system that a computer can execute. Software is a broad term that can imply one or many programs, and it can also refer to applications that may actually consist of more than one program.

software driver Software that acts as the liaison between a piece of hardware and the operating system and allows the use of a component.

software patch A small program created to repair or update a program. Software patches are normally installed through an executable file but can also be installed manually or copied over existing files.

solenoid An electromechanical device that, when activated, produces an instant push or pull force.

slave drive The secondary drive in an IDE master/slave disk configuration.

slot A motherboard receptacle into which compatible expansion cards or processors can be inserted.

small computer system interface See SCSI.

SMART (Self-Monitoring Analysis and Reporting Technology) A drive technology that monitors its own performance and warns the operating system (and user) of possible future failure.

Smart card A plastic card (similar in size to a credit card) that has memory for storing information and possibly an embedded microprocessor. These cards are preprogrammed and can provide access to a secure location and also can electronically monitor access.

SMTP (Simple Mail Transfer Protocol) An Application-level protocol used to send e-mail messages from one mail server to another. When you configure your e-mail application, you have to specify the SMTP server that your e-mail application will be sending mail to.

SNMP (Simple Network Management Protocol) A set of protocols used for collecting information about a network. SNMP agents are network devices such as computers, routers, and bridges that gather information about themselves and return the information to a system running an SNMP management program.

source All computer programs—operating system or application—are nothing but a collection of program code. This is the source code or "source" that defines what a program is and how it works. The open source movement is involved with allowing you to see and even modify this code.

spin speed An indication of how fast the platters on a fixed disk are spinning.

spindle The rod that platters are mounted on in a hard disk drive.

spool file A file used to temporarily store data for later processing (usually for printing). Also called a queue.

spooling Writing data to a queue (spool file) for later processing. This allows the computer to continue its normal operations.

SPS (standby power supply) A power backup device that will provide battery power to a computer in the event of a power failure. SPS software provides additional features. A server should be protected by an SPS to prevent hard shutdowns.

SRAM See static RAM (SRAM).

star topology A network design with a central connectivity device (hub, switch, or MAU) to which all other devices connect.

start/stop cycle Every time power is applied to a hard disk, the platters spin up to their set RPM. This process creates a cushion of air that then lifts the read/write heads off of the surface of the platter. When the power is shut down, the air cushion is lost and the heads fall back down to rest on the platters. Every time
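The spooling entries describe writing jobs to a queue so the computer can continue working while the printer catches up. A minimal first-in, first-out sketch (the class name and job names are illustrative):

```python
from collections import deque

class PrintSpooler:
    """Minimal spooler: jobs are written to a queue (the spool) so the
    submitting program can continue; the printer drains them later,
    first in, first out."""
    def __init__(self):
        self.spool = deque()

    def submit(self, document):
        self.spool.append(document)   # returns immediately

    def drain(self):
        printed = []
        while self.spool:
            printed.append(self.spool.popleft())
        return printed

s = PrintSpooler()
s.submit("report.txt")
s.submit("invoice.txt")
print(s.drain())  # ['report.txt', 'invoice.txt']
```

A real spool file lives on disk rather than in memory, but the queue discipline is the same.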
this start/stop cycle occurs, some wear is applied to the platters. Normally a hard disk will have a minimum of 30,000 to 50,000 start/stop cycles in its lifetime.

ST506 interface A popular hard-disk interface standard developed by Seagate Technologies, first used in IBM's PC/XT computer and still popular today, with disk capacities smaller than about 40MB. ST506 has a relatively slow data transfer rate of 5 megabits per second.

stack Another name for the memory map, or the way memory is laid out.

standard peripheral power connector Type of connector used to power various internal drives. Also called a Molex connector.

star network A network topology in the form of a star. At the center of the star is a wiring hub or concentrator, and the nodes or workstations are arranged around the central point representing the points of the star.

start bit In asynchronous transmissions, a start bit is transmitted to indicate the beginning of a new data word.

Start menu As the main focus of the Windows 9x/NT/2000 user interface, the Start menu allows program shortcuts to be placed for easy and organized access.

static RAM (SRAM) A type of computer memory that retains its contents as long as power is supplied. It does not need constant refreshment like dynamic RAM chips.

from the paper after the toner has been transferred to the paper.

static strap See ESD wrist strap.

stepper motor A very precise motor that can move in very small increments. Often used in printers.

stepping Stepping is similar to version numbers: as updates are made to chips, the version numbers change. You'll want to consider processor stepping particularly when upgrading a single-processor system to a multiprocessor one. Mixing processor steppings does not always work well. One stepping (revision) between CPUs is acceptable.

stop bit(s) In asynchronous transmissions, stop bits are transmitted to indicate the end of the current data word. Depending on the convention in use, one or two stop bits are used.

STP (shielded twisted-pair) Cabling that has a braided foil shield around the twisted pairs of wire to decrease electrical interference.

stylus A pen-like pointing device used in pen-based systems and personal digital assistants.

subnet mask The subnet mask is a required part of any TCP/IP configuration, and it is used to define which addresses are local and which are on remote networks.

Sun Solaris A Unix-based operating environment created by Sun Microsystems.
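The subnet mask entry says the mask decides which addresses are local and which are remote. The test a TCP/IP stack actually performs is a bitwise AND of each address with the mask; if the results match, the destination is local. A sketch using Python's standard ipaddress module (the addresses are example values):

```python
import ipaddress

def same_subnet(local_ip, dest_ip, mask):
    """A destination is local when (address AND mask) matches for both
    ends; otherwise the traffic must be handed to a router."""
    net = int(ipaddress.IPv4Address(mask))
    a = int(ipaddress.IPv4Address(local_ip)) & net
    b = int(ipaddress.IPv4Address(dest_ip)) & net
    return a == b

print(same_subnet("192.168.1.10", "192.168.1.77", "255.255.255.0"))  # True
print(same_subnet("192.168.1.10", "192.168.2.77", "255.255.255.0"))  # False
```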
SuperVGA (SVGA) An enhancement to the Video Graphics Array (VGA) video standard defined by the Video Electronics Standards Association (VESA).

surface mount See Quad Small Outline Package (QSOP).

surge protector A device that protects sensitive equipment from electrical surges or spikes through the use of a breaker.

surge suppressor Also known as a surge protector. A regulating device placed between the computer and the AC line connection that protects the computer system from power surges.

SVGA See SuperVGA (SVGA).

swap file On a hard disk, a file used to store parts of running programs that have been swapped out of memory temporarily to make room for other running programs. A swap file may be permanent, always occupying the same amount of hard disk space even though the application that created it may not be running, or temporary, created only as and when needed.

swipe card Digitally encoded card used to control access to a restricted area. Access can be granted or denied, as well as monitored.

switch Often referred to as an intelligent hub. The benefit of a switch over a hub is that a switch will read the information coming inbound and, based on the address located in the data header, the switch will send the information out on the receiving addressed port. This eliminates the network-wide propagation that occurs with a hub.

switchbox A device that allows the user to manually or automatically switch between two or more devices. An economical alternative to a KVM, switchboxes work well for monitors.

SYN flood A denial-of-service attack in which the hacker sends a barrage of SYN packets. The receiving station tries to respond to each SYN request for a connection, exhausting all the server resources. All incoming connections are rejected until all current connections can be established.

synchronization The timing of separate elements or events to occur simultaneously. 1. In a multimedia presentation, synchronization ensures that the audio and video components are timed correctly, so they actually make sense. 2. In computer-to-computer communications, the hardware must be synchronized so that file transfers can take place. 3. The process of updating files on both a portable computer and a desktop system so that they both have the latest versions is also known as synchronization.

synchronous DRAM A type of DRAM memory module that uses memory chips synchronized to the speed of the processor.

synchronous transmission In communications, a transmission method that uses a clock signal to regulate data flow. Synchronous transmissions do not use start and stop bits.

syntax The proper way of forming a text command for entry into the computer. Many commands have a number of different options, each of which requires a particular format.
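The switch entry describes forwarding by destination address rather than repeating every frame on every port, as a hub does. A toy model of that lookup (the address-to-port table and frame contents are hypothetical):

```python
def forward(frames, mac_table):
    """Sketch of switch behavior: read the destination address in each
    frame's header and send the frame only out the port that address
    lives on. A hub would repeat every frame on every port instead."""
    # Each frame is (destination_address, payload); the payload is
    # carried along untouched, only the header is inspected.
    return [(dest, mac_table[dest]) for dest, payload in frames]

# A hypothetical address-to-port table the switch has learned:
table = {"aa:aa": 1, "bb:bb": 2}
print(forward([("bb:bb", "hello"), ("aa:aa", "hi")], table))
```

Each result pairs the destination with the single port chosen for it, which is what eliminates network-wide propagation.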
Syscon A command line utility used to control a NetWare 3.x server, especially for user administration of the Bindery (configuration, trust, login, and account information).

system attribute Attribute of DOS that is used to tell the OS that this file is needed by the OS and should not be deleted. Marks a file as part of the operating system and will also protect the file from deletion.

system board The sturdy sheet or board to which all other components on the computer are attached. These components consist of the CPU, underlying circuitry, expansion slots, video components, and RAM slots, just to name a few. Also known as a logic board, motherboard, or planar board.

system disk A disk that contains all the files necessary to boot and start the operating system. In most computers, the hard disk is the system disk; indeed, many modern operating systems are too large to run from floppy disk.

SYSTEM.INI In Microsoft Windows, an initialization file that contains information on your hardware and the internal Windows operating environment.

System Monitor A Windows 2000 utility that monitors current system performance through the use of counters.

system resources On a Windows 3.x or 95/98 machine, the system resources represent those components of the PC that are being used (memory, CPU, etc.).

system software The programs that make up the operating system, along with the associated utility programs, as distinct from an application program.

Systems Management Server (SMS) Microsoft server software used to manage network resources; includes wake-on-LAN capability.

tabs On many windows you will find that, to save space, a single window will have many tabs, each of which can be selected to display particular information.

tape cartridge A self-contained tape storage module, containing tape much like that in a video cassette. Tape cartridges are primarily used to back up hard disk systems.

tape drive Removable media drive that uses a tape cartridge that has a long polyester ribbon coated with magnetic oxide and wrapped around two spools with a read/write head in between.

Tar A Unix backup utility included in the operating system. (Short for Tape ARchive.)

target Another name for the backup media, it is the destination for the data being backed up. It is usually a tape drive or other backup device.

taskbar The area of the Windows 9x/NT/2000 interface which includes the Start button and the System Tray, as well as icons for any open programs.

TCP/IP Acronym for Transmission Control Protocol/Internet Protocol. A set of computer-to-computer communications protocols that encompass media access, packet transport, session communications, file transfer, e-mail,
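Initialization files such as SYSTEM.INI use a plain-text [section] and key=value layout, which Python's standard configparser module can read. The fragment below is illustrative; apart from the well-known shell= line in [boot], the keys shown are not meant to reproduce any particular machine's file.

```python
import configparser

# A fragment in the same [section] / key=value format as SYSTEM.INI:
ini_text = """
[boot]
shell=explorer.exe

[386Enh]
device=vmm32.vxd
"""

config = configparser.ConfigParser()
config.read_string(ini_text)
print(config["boot"]["shell"])  # -> explorer.exe
```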
and terminal emulation. TCP/IP is supported by a very large number of hardware and software vendors and is available on many different computers from PCs to mainframes.

technical support CD A compact disc provided by a product manufacturer, which contains product support, updates, and/or product patches/fixes.

Telnet A protocol that functions at the Application layer of the OSI model, providing terminal emulation capabilities.

temporary swap file A swap file that is created every time it is needed. A temporary swap file will not consist of a single large area of contiguous hard disk space, but may consist of several discontinuous pieces of space. By its very nature, a temporary swap file does not occupy valuable hard disk space if the application that created it is not running. In a permanent swap file the hard disk space is always reserved and is therefore unavailable to any other application program.

terminal A monitor and keyboard attached to a computer (usually a mainframe), used for data entry and display. Unlike a personal computer, a terminal does not have its own central processing unit or hard disk.

Terminate and Stay Resident (TSR) A DOS program that stays loaded in memory, even when it is not actually running, so that you can invoke it very quickly to perform a specific task.

Termination The use of a resistor at the end of a cable to prevent signals from bouncing back on the wire. Terminators are used in SCSI chains as well as coaxial cable networks.

terminator A device attached to the last peripheral in a series or the last node on a network. A resistor is placed at both ends of a coax Ethernet cable to prevent signals from reflecting and interfering with the transmission.

text mode A video display mode for a video card that allows it to display only text. When running DOS programs, a video card is in text mode.

thermal printer A nonimpact printer that uses a thermal printhead and specially treated paper to create an image.

thick Ethernet Connecting coaxial cable used on an Ethernet network. The cable is 1cm (approximately 0.4") thick and can be used to connect network nodes up to a distance of approximately 3300 feet. Thick Ethernet is primarily used for facility-wide installations. Also known as 10Base5.

thin Ethernet Connecting coaxial cable used on an Ethernet network. The cable is 5mm (approximately 0.2") thick, and can be used to connect network nodes up to a distance of approximately 1000 feet. Thin Ethernet is primarily used for office installations. Also known as 10Base2.

thrashing A slang term for the condition that occurs when Windows must constantly swap data between memory and hard disk. The hard disk spins continuously during this and makes a lot of noise.

threshold An attribute level that is set as a cut-off point between significant (critical) and nonsignificant events or conditions. These might include temperature, electrical current, CPU load, RAM use, and free hard disk space.
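The threshold entry boils down to comparing each monitored attribute against its cut-off. A sketch of that check (the attribute names and limits are example values, not defaults from any monitoring product):

```python
def check_thresholds(readings, thresholds):
    """Compare each monitored attribute with its cut-off point and
    report which ones have crossed into the critical range."""
    return [name for name, value in readings.items()
            if value >= thresholds[name]]

# Hypothetical limits and a snapshot of current readings:
limits = {"cpu_load": 90, "temp_c": 70, "disk_used_pct": 95}
now = {"cpu_load": 97, "temp_c": 55, "disk_used_pct": 96}
print(check_thresholds(now, limits))  # ['cpu_load', 'disk_used_pct']
```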
token passing A media access method that gives every NIC equal access to the cable. The token is a special packet of data that is passed from computer to computer. Any computer that wants to transmit has to wait until it has the token, at which point it can add its own data to the token and send it on.

Token Ring network A local area network with a ring structure that uses token passing to regulate traffic on the network and avoid collisions. On a Token Ring network, the controlling computer generates a "token" that controls the right to transmit. This token is continuously passed from one node to the next around the network. When a node has information to transmit, it captures the token, sets its status to busy, and adds the message and the destination address. All other nodes continuously read the token to determine if they are the recipient of a message; if they are, they collect the token, extract the message, and return the token to the sender. The sender then removes the message and sets the token status to free, indicating that it can be used by the next node in sequence.

tolerance band Found on a fixed resistor, this colored band indicates how well the resistor holds to its rated value.

toner Black carbon substance mixed with polyester resins and iron oxide particles. During the EP printing process, toner is first attracted to areas that have been exposed to the laser in laser printers and is later deposited and melted onto the print medium.

toner cartridge The replaceable cartridge in a laser printer or photocopier that contains the electrically charged ink to be fused to the paper during printing.

topology A way of laying out a network. Can describe either the logical or physical layout.

touch screen A special monitor that lets the user make choices by touching icons or graphical buttons on the screen.

Tower A vertical computer case design. Tower-style computers are the most common in use today.

Tower of Hanoi A complex tape backup rotation used within a carefully planned and deployed strategy. To maintain the required history of file versions, a minimum of five media sets should be used in the weekly rotation schedule, or eight for a daily rotation scheme.

Trace Log Windows Trace Logs continuously trace server changes and actions. This information is useful in locating the source of potential problems. If a program action is responsible for server stress, then the Trace Log will help you track the program's utilization of server resources.

Tracert Used to trace the path of a packet across a TCP/IP network.

trackball An input device used for pointing, designed as an alternative to the mouse.

tracks The concentric circle unit of hard disk division. A disk platter is divided into these concentric circles.

transfer corona assembly The part of an EP process printer that is responsible for transferring
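The token passing entry says a station may transmit only while it holds the token, which circulates around the ring in a fixed order. A toy simulation of one circuit of the token (station names are illustrative):

```python
def token_ring(stations, wants_to_send, rounds=1):
    """Pass the token from station to station around the ring; only the
    current token holder may transmit, so every NIC gets equal access."""
    transmissions = []
    for _ in range(rounds):
        for station in stations:      # the token travels the ring in order
            if station in wants_to_send:
                transmissions.append(station)
    return transmissions

# B and D both want to transmit; the token reaches B first:
print(token_ring(["A", "B", "C", "D"], {"D", "B"}))  # ['B', 'D']
```

Because access is granted strictly in token order, two stations can never transmit at once, which is how collisions are avoided.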
the developed image from the EP drum to the that combines the transmitting and receiving
paper. circuitry needed for asynchronous transmission
over a serial line. Asynchronous transmissions
transfer step The step in the EP print process use start and stop bits encoded in the data
where the developed toner image on the EP stream to coordinate communications rather
drum is transferred to the print media using the than the clock pulse found in synchronous
transfer corona. transmissions.
transistor Abbreviation for transfer resistor. UDP (user datagram protocol)
A semiconductor component that acts like a
switch, controlling the flow of an electric cur- User Datagram Protocol UDP (User Data-
rent. A small voltage applied at one pole con- gram Protocol) is a Transport level transmis-
trols a larger voltage on the other poles. sion protocol that is similar to TCP but it does
Transistors are incorporated into modern not provide reliable delivery of data between
microprocessors by the million. hosts.
Transmission Control Protocol/Internet Ultra DMA IDE Also known as ATA version
Protocol See TCP/IP. 4 (ATA-4), it can transfer data at 33Mbps, so it
is also commonly seen in motherboard specifi-
Transport layer The fourth of seven layers of cations as Ultra DMA/33, Ultra 66, or UDMA.
the International Organization for Standard-
ization’s Open Systems Interconnection (ISO/ Ultra 160 A SCSI 3 release, it is a parallel
OSI) model for computer-to-computer commu- interface that uses a 16-bit wide bus, LVD sig-
nications. The Transport layer defines proto- naling and termination, and has a maximum
cols for message structure and supervises the transfer speed of 160MBps.
validity of the transmission by performing
some error checking. Ultra-3 SCSI This is the latest SCSI standard.
Ultra-3, also called Ultra SCSI, supports data
Troubleshooting The methodical and sys- throughput of 20–40MBps.
tematic process of locating and repairing prob-
lems that occur. Ultra 320 SCSI Ultra 320 is the next genera-
tion of parallel SCSI interface. At one point it
TSR See Terminate and Stay Resident (TSR). was called SCSI Ultra-4. It is a 16-bit wide bus
that uses LVD signaling, LVD termination, a
twisted-pair cable Cable that comprises two Centronics 68-pin connector, and has a
insulated wires twisted together at six twists transfer speed of 320MBps.
per inch. In twisted-pair cable, one wire carries
the signal and the other is grounded. Telephone uninstall To remove a program from a
wire installed in modern buildings is often computer. This generally involves removing its
twisted-pair wiring. configuration information from the Registry,
its icons from the Start menu, and its program
UART Acronym for Universal Asynchronous code from the file system.
Receiver/Transmitter. An electronic module
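The UDP entry above describes a connectionless transport with no delivery guarantees. A minimal sketch of that datagram model, using Python's standard socket module (the loopback address, port choice, and message are illustrative only):

```python
# Minimal sketch of UDP's connectionless datagram model using Python's
# standard socket module. The loopback address and message are arbitrary.
import socket

# A "server" socket bound to an ephemeral localhost port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
receiver.settimeout(5)            # don't block forever if the datagram is lost
addr = receiver.getsockname()

# A "client" socket: no handshake, no connection state -- just send.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)

# Each recvfrom() returns one whole datagram; UDP itself provides no
# acknowledgment or retransmission, so delivery is not guaranteed.
data, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Unlike TCP, no connection is established before `sendto`; if the datagram were lost in transit, `recvfrom` would simply time out, which is why applications that need reliability over UDP add their own acknowledgments.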
Universal Serial Bus See USB.

Unix/Linux A 32-bit, multiuser, multitasking, portable operating system, Unix/Linux is by far the least expensive NOS. In many circles it is free for distribution, or a nominal fee is charged. Due to its open architecture, it can then be reengineered to best meet your business needs.

user forum A website on the Internet, similar to a chat room, where people can share information on a particular topic. People can also post and reply to messages and surveys. Also see knowledge base.
vaccine A program designed to detect and remove computer viruses. New viruses appear continually, so vaccine programmers do the best they can to catch up, but they must always lag behind to some extent.

vacuum tube Electronic component that is a glorified switch. A small voltage at one pole switches a larger voltage at the other poles on or off.

variable resistor A resistor that does not have a fixed value. Typically the value is changed using a knob or slider.

verification In the context of backup security, verification compares the original data and the data copy on the media to ensure that they are the same. This feature is built into comprehensive backup software.

version Each time that computer software is modified, new features are added and old problems are, hopefully, fixed. To tell these modified programs apart, computer programmers use versions. These are incremented by one digit (for example, from 1.0 to 2.0) for major revisions, or by a tenth of a digit (for example, from 2.0 to 2.1) for minor modifications. Higher version numbers mean newer versions.

VGA Acronym for Video Graphics Array. A video adapter. VGA supports previous graphics standards, and provides several different graphics resolutions, including 640 pixels horizontally by 480 pixels vertically. A maximum of 256 colors can be displayed simultaneously, chosen from a palette of 262,144 colors. Because the VGA standard requires an analog display, it is capable of resolving a continuous range of gray shades or colors. In contrast, a digital display can only resolve a finite range of shades or colors.

video adapter An expansion board that plugs into the expansion bus in a DOS computer and provides for text and graphics output to the monitor. The adapter converts the text and graphic signals into several instructions for the display that tell it how to draw the graphic.

Video Graphics Array See VGA.

video RAM (VRAM) Special-purpose RAM with two data paths for access, rather than just one as in conventional RAM. These two paths let a VRAM board manage two functions at once: refreshing the display and communicating with the processor. VRAM doesn't require the system to complete one function before starting the other, so it allows faster operation for the whole video system.

virtual memory A memory-management technique that allows information in physical memory to be swapped out to a hard disk. This technique provides application programs with more memory space than is actually available in the computer. True virtual-memory management requires specialized hardware in the processor for the operating system to use; it is not just a question of writing information out to a swap file on the hard disk at the application level.

virus A program intended to damage your computer system without your knowledge or permission. A virus may attach itself to another program or to the partition table or the boot track on your hard disk. When a certain event occurs, a date passes, or a specific program executes, the virus is triggered into action. Not all viruses are harmful; some are just annoying.

virus protection suite Offers centralized protection and updates. The real benefit of a virus protection suite comes when it is time to update the virus definitions. Virus protection suites are updated on the server, which then monitors the rest of the network.

VL bus Also known as VL local bus. Abbreviation for the VESA local bus, a bus architecture introduced by the Video Electronics Standards Association (VESA), in which up to three adapter slots are built into the motherboard. The VL bus allows for bus mastering.

VLAN (virtual local area network) A logical grouping of hosts on one or more LANs that allows communication to occur between hosts as if they were on the same physical LAN or segment.

VLSI (Very Large Scale Integration) Technology used by chip manufacturers to integrate the functions of several small chips into one chip.

volts Unit of electrical potential.

VPN (virtual private network) A virtual private network (VPN) is similar to a dial-in connection in that it allows users to access their network remotely. The VPN connection is encrypted, and because all communication is encapsulated within the VPN protocol, users can access network resources through the VPN that they would otherwise be unable to see using standard TCP/IP connectivity.

VRM (voltage regulator module) A small card that is used in a multiprocessor motherboard in the empty processor slots until needed by additional processors.

wait state A clock cycle during which no instructions are executed because the processor is waiting for data from a device or from memory.

wake-on-LAN (WOL) Technology that allows an administrator to boot a machine at a remote location. After the machine is working, the administrator can perform maintenance tasks, such as backups and virus scans, during off hours. The network adapter maintains a very low power state even when the computer is powered off. The NIC then looks for special packets on the network indicating it should wake up the machine.

WAN (wide area network) Network that expands LANs to include networks outside of the local environment and also to distribute resources across distances.

warm boot Refers to pressing Control+Alt+Delete to reboot the computer. This type of booting doesn't require the computer to perform all of the hardware and memory checks that a cold boot does.

weekly rotation A backup rotation schedule that involves using a dedicated backup medium for each day of the week.

wide area network See WAN (wide area network).
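The version entry earlier on this page notes that major revisions increment the whole digit and minor revisions the tenth. A short sketch of comparing such version strings numerically rather than as plain text (the helper name is invented for illustration):

```python
# Hypothetical helper illustrating the major.minor scheme described in the
# "version" entry: compare versions numerically, not as plain strings.
def parse_version(text: str) -> tuple:
    """Split '2.1' into (2, 1) so each component compares as a number."""
    return tuple(int(part) for part in text.split("."))

# 2.0 -> 2.1 is a minor revision; 1.0 -> 2.0 is a major one.
newer_minor = parse_version("2.1") > parse_version("2.0")
newer_major = parse_version("2.0") > parse_version("1.0")

# Plain string comparison would rank "10.0" below "9.0"; numeric
# parsing ranks them correctly.
correct_order = parse_version("10.0") > parse_version("9.0")
```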
VRAM See video RAM (VRAM).

Wide Ultra-2 SCSI This release provided LVD or HVD signaling, a 16-bit wide bus, transfer speeds of 80MBps, and LVD or HVD termination; it used a Centronics 68-pin connector.

window In a graphical user interface, a rectangular portion of the screen that acts as a viewing area for application programs. Windows can be tiled or cascaded and can be individually moved and sized on the screen. Some programs can open multiple document windows inside their application window to display several word processing or spreadsheet data files at the same time.

Windows 95 Windows 95 is a 32-bit, multitasking, multithreaded operating system capable of running DOS, Windows 3.1, and Windows 95 applications; supports Plug and Play (on the appropriate hardware); and adds an enhanced FAT file system in the Virtual FAT, which allows long filenames of up to 255 characters while also supporting the DOS 8.3 file-naming conventions.

Windows 98 The home PC operating system released by Microsoft as the successor to their popular Windows 95 operating system. Basically the same as Windows 95, it offers a few improvements. For example, Windows 98 improves upon the basic "look and feel" of Windows 95 with a "browser-like" interface. It also contains bug fixes and can support two monitors simultaneously. In addition to new interface features, it includes support for new hardware, including Universal Serial Bus devices.

Windows 2000 Windows operating system that incorporates the "look and feel" of Windows 9x with the power of Windows NT.

Windows Desktop See Desktop.

Windows Installer A new method Microsoft is using to allow users to customize their application installations more easily. The Windows Installer also makes it easier for users to install approved software on secured workstations and can automatically repair damaged installs.

Windows Internet Name Service (WINS) WINS provides a database for the storage and retrieval of NetBIOS computer names. Each client must register with the WINS server to be added to and to query the database.

Windows NT A 32-bit multitasking portable operating system developed by Microsoft. Windows NT is designed as a portable operating system, and initial versions ran on Intel 80386 (or later) processors and RISC processors, such as the MIPS R4000 and the DEC Alpha.

Windows Program Manager Windows 3.x file that contains all of the program icons, group icons, and menus used for organizing, starting, and running programs.

WIN.INI File that contains Windows environmental settings that control the environment's general function and appearance.

WINIPCFG In Windows 9x, this is the utility that allows you to view your current TCP/IP configuration. It also allows a user to request a new IP configuration from a DHCP server.

WinNuke An attack that is effective because of the way the Windows TCP/IP stack handles bad data in the TCP header. Instead of returning an error code or rejecting the bad data, it sends the operating system to the blue screen of death. A security patch is available from Microsoft to fix this.
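The Windows 95 entry above notes that the Virtual FAT keeps long filenames while still honoring the DOS 8.3 conventions. A simplified sketch of how an 8.3 alias can be derived from a long name (illustrative only; real Windows also handles collisions beyond the `~1` tail, additional illegal characters, and Unicode):

```python
# Simplified sketch of deriving a VFAT-style 8.3 alias from a long
# filename, e.g. "Program Files.txt" -> "PROGRA~1.TXT". Real Windows
# handles collisions, more illegal characters, and Unicode.
def short_name(long_name: str) -> str:
    base, _, ext = long_name.rpartition(".")
    if not base:                  # no dot at all: whole name is the base
        base, ext = long_name, ""
    base = "".join(c for c in base if c not in " .").upper()
    ext = "".join(c for c in ext if c != " ").upper()[:3]
    if len(base) > 8:
        base = base[:6] + "~1"    # numeric tail distinguishes collisions
    return f"{base}.{ext}" if ext else base
```

For example, `short_name("Program Files.txt")` yields the familiar `PROGRA~1.TXT` form that DOS applications see.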
wizard Wizards are preprogrammed utilities that walk the user through a particular task. Each wizard generally includes a number of different pages, each of which allows you to enter information or choose particular options. At the finish of the wizard, the computer will then perform the requested task based on the information it has gathered.

WLAN (wireless local area network) A local area network based on wireless technology. A base station sends and receives signals from each client station, each of which has a supported wireless transmission card.

word The size of a word varies from one computer to another, depending on the CPU. For computers with a 16-bit CPU, a word is 16 bits (2 bytes). On large mainframes, a word can be as long as 64 bits (8 bytes).

workgroup A group of individuals who work together and share the same files and databases over a local area network. Special groupware such as Lotus Notes coordinates the workgroup and allows users to edit drawings or documents and update the database as a group.

working directory Programs that need to save temporary files or configuration data while they are running do so within their working directory. Users can also have a working directory to save their temporary files.

workstation 1. In networking, any personal computer (other than the file server) attached to the network. 2. A high-performance computer optimized for applications such as computer-aided design, computer-aided engineering, or scientific applications.

World Wide Web (WWW) This is the graphical extension of the Internet that features millions of pages of information accessed through the use of the Hypertext Transfer Protocol (HTTP).

write-protect To prevent the addition or deletion of files on a disk or tape. Floppy disks have write-protect notches or small write-protect tabs that allow files to be read from the disk, but prevent any modifications or deletions. Certain attributes can make individual files write-protected so they can be read but not altered or erased.

write-protect tab The small notch or tab in a floppy disk that is used to write-protect it.

writing step The step in the EP print process where the items being printed are written to the EP drum. In this step, the laser is flashed on and off as it scans across the surface of the drum. The area that the laser shines on is discharged to almost ground (−100 volts).

x86 series The general name given to the Intel line of IBM-compatible CPUs.

XGA Acronym for Extended Graphics Array. XGA is only available as a Micro Channel Architecture expansion board; it is not available in ISA or EISA form. XGA supports a resolution of 1024 horizontal pixels by 768 vertical pixels with 256 colors, as well as a VGA mode of 640 pixels by 480 pixels with 65,536 colors, and like the 8514/A, XGA is interlaced. XGA is optimized for use with graphical user interfaces; instead of being very good at drawing lines, it is a bit-block transfer device designed to move blocks of bits like windows or dialog boxes.
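The word entry above gives word sizes in both bits and bytes. The arithmetic can be checked with Python's struct module, whose standard (`=`) size codes have fixed, documented widths:

```python
# Word sizes from the "word" entry: 16 bits = 2 bytes, 64 bits = 8 bytes.
# struct's standard ("=") size codes have fixed widths regardless of CPU.
import struct

sizes = {
    "16-bit word": struct.calcsize("=H"),  # unsigned short: 2 bytes
    "32-bit word": struct.calcsize("=I"),  # unsigned int: 4 bytes
    "64-bit word": struct.calcsize("=Q"),  # unsigned long long: 8 bytes
}
```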
yearly rotation The yearly rotation builds on the monthly rotation. Along with having daily tapes for each weekday and weekly tapes for each Friday, you also keep the last tape from each month for a year. This allows you to go back daily for a week, weekly for a month, or monthly for a year.

Z.E.N.works A family of directory-enabled system management products from Novell. Z.E.N.works supports Windows and NetWare clients.

zero insertion force (ZIF) A type of processor socket where you don't have to "snap" the chip into the socket. Rather, you simply set the chip into the ZIF socket and push a bar down to secure it.

zero wait state Describes a computer that can process information without wait states. A wait state is a clock cycle during which no instructions are executed because the processor is waiting for data from a device or from memory.

ZIF socket Abbreviation for Zero Insertion Force socket. A specially designed chip socket which makes replacing a chip easier and safer.

Zip A portable magnetic backup device created by the Iomega company. Zip drives hold large amounts of data (100MB or 250MB on one cartridge), the replacement cost of the media is reasonable, and in general the media are reliable.
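The weekly and yearly rotation entries describe dedicated weekday tapes, a weekly tape each Friday, and a retained month-end tape. That schedule can be sketched as a labeling function (the label format is invented for illustration):

```python
# Sketch of the weekly/monthly rotation described above: daily tapes
# Monday-Thursday, a weekly tape each Friday, and the last Friday tape
# of the month promoted to the monthly set kept for a year.
# The label strings are invented for illustration.
from datetime import date, timedelta

def tape_label(day):
    """Return which tape the rotation schedule uses on a given date."""
    weekday = day.strftime("%A")
    if weekday in ("Saturday", "Sunday"):
        return None                          # no backup scheduled
    if weekday == "Friday":
        # If the same weekday next week falls in a different month,
        # this is the month's last Friday: keep its tape for a year.
        if (day + timedelta(days=7)).month != day.month:
            return f"monthly-{day.strftime('%B')}"
        return f"weekly-{day.day}"
    return f"daily-{weekday}"
```

With this sketch, a restore can reach back daily for a week, weekly for a month, or monthly for a year, exactly as the yearly rotation entry describes.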