
SYSTEMS ADMINISTRATION

AND MAINTENANCE
Learning Module

Andrew Caezar A. Villegas


Charles Lawrence Javate
Engr. Ronald S. Santos
Joey Dela Cruz
Jomasel G. Savellano
Rosalie B. Sison
Randy R. Maliwat
Racquel L. Pula
Nueva Ecija University of Science and Technology
College of Information and Communications Technology
Sumacab, Cabanatuan City
TABLE OF CONTENTS

SYSTEMS ADMINISTRATION AND MAINTENANCE ....................... 0


CHAPTER I...................................................... 3
SYSTEM ADMINISTRATOR........................................... 3
Related fields ............................................... 3
Duties and Responsibilities of a System Administrator ........ 4
Recommendations for Better System Administration ............. 4
1. Using a Ticket Management System ....................... 4
2. Manage Quick Requests .................................. 5
3. Adopt Three Time-Saving Guidelines ...................... 5
Merriam-Webster Dictionary’s Definition of Emergency ........ 6
1. Start Every New Host in a Known State .................. 6
2. Make Email Work Efficiently ............................ 6
3. Document Everything .................................... 7
4. Address the major loss of time ......................... 7
5. Find a quick solution .................................. 7
6. Provide the necessary power and cooling ................ 7
7. Deploy Easy Monitoring ................................. 8
CHAPTER II..................................................... 9
WORKSTATION.................................................... 9
Overview ..................................................... 9
Learning Objectives: ......................................... 9
Setting up .................................................. 10
Lesson Proper ............................................... 10
Managing workstation operating systems leads to three simple
tasks: ...................................................... 11
The One, Some, Many Technique Reduces the Possibility of a
Failed Patch ............................................... 21
3. Network Configuration .................................... 23
Assessing learning .......................................... 29
Activity 1 .................................................. 29
CHAPTER III................................................... 31
SERVER........................................................ 31
Understanding the Cost Amount of Server Hardware .......... 35
Continuing Data Integrity ................................. 40
Place Servers in your Data Center ......................... 41
Client Server OS Arrangement .............................. 41
Offer Remote Console Access ............................... 42
Enhancing Reliability and Service Ability ................. 45
CHAPTER IV.................................................... 53
SERVICES...................................................... 53
Overview .................................................... 53
Learning Objectives: ........................................ 54
1. The Basics ............................................. 54
1.2 Operational Requirements .............................. 56
1.3 Open Architecture ..................................... 57
1.4 Simplicity ............................................ 59
1.5 Vendor Relations ...................................... 59
1.6 Machine Independence .................................. 60
1.7 Environment ........................................... 61
1.8 Restricted Access ..................................... 61
1.9 Reliability ........................................... 62
1.10 Single or Multiple Servers ........................... 63
1.11 Centralization and Standards ......................... 64
1.12 Performance .......................................... 65
1.13 Monitoring ........................................... 65
1.14 Service Rollout ...................................... 66
2. The Icing ............................................. 66
3. Conclusion ............................................ 71

CHAPTER I
SYSTEM ADMINISTRATOR

The system administrator, IT systems administrator, or sysadmin is a person employed to manage and maintain a computer system and/or a network. System administrators may be members of the Information Technology (IT) department or the Management Information Systems (MIS) department.

Related fields

Most companies also fill other positions related to system administration. Within a larger organization, these may be separate positions within the Technical Support or Information Services (IS) department.
• The Database Administrator (DBA) manages a database
infrastructure and is responsible for data integrity and
infrastructure reliability and accuracy.

• The network administrator manages network equipment, such as switches and routers, and diagnoses problems with the network or with the behavior of computers attached to it.

• The security administrator is a computer and network security expert who manages security devices such as firewalls and provides general security consultation.

• A web administrator maintains web server services (such as Apache or IIS) that allow internal or external access to web pages. Tasks include managing multiple websites, managing security, and installing the required components and software. Duties may also include managing software changes.

• Technical support staff address the issues of individual users with computer systems, give instructions and often training, and analyze and fix specific problems.

Duties and Responsibilities of a System Administrator

The responsibilities of the system administrator are broad in nature and vary significantly from one organization to another. System administrators are usually responsible for setting up, supporting, and maintaining servers or other computer systems, and for preparing for and responding to service interruptions and other issues. Certain roles may include scripting or basic programming, project management for systems-related projects, supervision or training of computer operators, and advising on computer problems beyond the expertise of technical support personnel. In order to do his or her job well, the system administrator must show a combination of technical knowledge and responsibility.
Some of the duties and responsibilities of a System Administrator:

• Morning system or software inspections.


• Performing data backups.
• Apply updates to the operating system and adjust
settings.
• Installation and configuration of new hardware or
software.
• Add / delete / create / modify user account information,
reset passwords, etc.
• Responding to technical questions.
• Responsible for security.
• Responsible for documenting the system setup.
• Troubleshooting any identified issues or problems.
• Keep the network up and running.

Recommendations for Better System Administration

Here are a couple of things you can do:

1. Using a Ticket Management System

System administrators get too many requests to remember them all. You need a tool to track the influx of requests that you receive. Whether you call this system request management or trouble-ticket tracking, you need it. Even if you are the only system administrator, you need at least a PDA to manage your list of tasks.

2. Manage Quick Requests

Have you ever noticed how difficult it is to get something done when people keep interrupting you? Too many distractions make it difficult to complete any long-term project. To address this, organize the system administration team in such a way that one member acts as the shield who handles regular interruptions, letting everyone else focus on their assignments uninterrupted.

3. Adopt Three Time-Saving Guidelines

• How do people get help?
• What is the scope of responsibility of the System Administration team?
• What’s our definition of emergency?

First, there's a guideline about how people get support. Since you have just implemented the ticket management system, this guideline not only informs users that it exists but also shows them how to use it. The main point of this guideline is to make clear that users will have to change their habits and can no longer hang around your office, keeping you from working. (Or, if a visit is still necessary, the request should still be logged through the ticket system.)

The second guideline describes the extent of the duties of the System Administration team. This information should be conveyed to both the system administrators and the customer base. New system administrators have trouble saying no and end up overloaded, doing other people's jobs for them. Hand-holding becomes "let me do that for you," and helpful guidance soon becomes a situation in which the system administrator spends time maintaining software and equipment that is of no benefit to the company. Older system administrators acquire the habit of curmudgeonly saying no too often, much to the disadvantage of any effort by management to make the team appear supportive.

The third guideline defines an emergency. If the system administrators are reluctant to say no to customers because they believe that every complaint is an emergency, this guideline can go a long way toward allowing the system administrators to repair leaking pipes rather than spend the entire day mopping the floor. In some organizations, this guideline is easier to formulate than in others.

Merriam-Webster Dictionary’s Definition of Emergency

• an unforeseen combination of circumstances or the resulting state that calls for immediate action
• an urgent need for assistance or relief

Those three guidelines will give an overloaded systems administration staff the breathing space they need to turn things around.

1. Start Every New Host in a Known State

We are always amazed at how many sites do not even have a systematic way of loading the operating system (OS) onto the hosts they install. Every modern operating system provides a way to automate the deployment process. Normally, the system is booted from a server and downloads a small program that prepares the disk, loads the operating system, loads applications, and then runs any locally specified installation scripts. The last step is one we control: we can add applications, configure options, and so on. Finally, the system reboots and is ready for use.

Automation like this has two advantages: time savings and repeatability.

The time savings reflect the fact that a manual process has been replaced by automation. One can start the process and perform other tasks while the automated installation finishes.

Repeatability means that machines are installed precisely and consistently every time. Having them correct means less testing before deployment. (You do test a workstation before you give it to someone else, don't you?) Repeatability saves a lot of time at the help desk: users can be better supported when help desk staff can expect a level of consistency in the systems they support. Repeatability also means that users are treated fairly; people will not be surprised to find that their workstations lack software or features that their colleagues have received.

2. Make Email Work Efficiently

The people who approve your budget are high enough in the organizational hierarchy that email and calendaring may be the only services they use directly. If these applications are stable and reliable, management will have more confidence in the team, and resource requests will be easier. Having a stable email system can even provide you with excellent cover as you fight other battles. Make sure that management's support staff also see positive changes; often, these people are the ones who really run the company.

3. Document Everything

Documentation does not have to be a major burden: organize a wiki, or create a directory of text files on a file server. Start creating checklists for common activities, such as how to set up a new employee or how to set up a user's email. Once these are recorded, it is easier to delegate the tasks to a junior employee or a new recruit. Labeling physical devices helps the organization avoid errors and makes it much easier for new people to help out. Implement a policy that you must pause to label an unknown device before working on it, even when you're in a rush. Label the front and back of the device. Stick a label with the same text on both the power adapter and the device.
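
For illustration only, a new-employee checklist might look like the following; the specific items are hypothetical and will differ from site to site:

New-Employee Setup Checklist
[ ] Create the user account and mailbox
[ ] Set a temporary password; require a change at first login
[ ] Add the account to the department's groups and mailing lists
[ ] Load and label the workstation (front, back, and power adapter)
[ ] Record the hostname, MAC address, and asset tag in the inventory
[ ] Verify login, email, printing, and file-server access
[ ] Schedule a one-week follow-up for questions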

4. Address the major loss of time

Choose the single largest time drain, and assign one person to it until it has been fixed. This may mean that the rest of the team has to work a little harder in the meantime, but it will be worth it to have that problem fixed. This person should provide regular status updates and, if blocked by technical or political dependencies, ask for support.

5. Find a quick solution

When stuck in a hole, one is entirely justified in strategically choosing short-term solutions to a variety of issues so that the few major high-impact projects can be completed. Keep a list of the long-term solutions that have been postponed. When stability has been established, use that list to plan the next round of projects.

6. Provide the necessary power and cooling

Make sure that every computer room has enough cooling and electricity. Ideally, each device should receive its power from an uninterruptible power supply (UPS). Nonetheless, when you're trying to climb out of a hole, it is smart to make sure that at least the most critical servers and network equipment are on a UPS. Individual UPS units, one at the base of each rack, can be a good short-term solution. UPSs should have enough battery capacity for servers to ride out a one-hour outage and shut down cleanly before the batteries run out. Power failures longer than one hour are very rare; most outages are measured in seconds. Small UPSs are a good option until a larger UPS that can serve the whole data center is installed. If you purchase a small UPS, be sure to ask the supplier exactly what sort of socket is needed for a specific model; you would be amazed how many need something special. Cooling is even more important than power. Each watt of power a machine consumes is turned into heat, and thanks to the laws of thermodynamics, more than 1 watt of energy is expended to provide cooling for the heat produced by 1 watt of computing power. That is, it is common to spend more than 50 percent of your total energy on cooling.
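
A rough worked example: if the equipment in a room draws 10 kW, it also emits 10 kW of heat, and by the reasoning above removing that heat costs more than another 10 kW. The room then draws more than 20 kW in total, of which more than half goes to cooling.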

7. Deploy Easy Monitoring

While we would prefer a comprehensive monitoring system with loads of bells and whistles, a lot can be achieved with a simple one that pings key servers and raises an alert about any problem via email.
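
A minimal sketch of such a monitor in Python. The host list, SMTP relay, and addresses are placeholder assumptions; it checks reachability with a TCP connection to the SSH port rather than an ICMP ping, which would require raw sockets or an external tool:

import smtplib
import socket
from email.message import EmailMessage

HOSTS = ["server1.example.com", "server2.example.com"]  # placeholder key servers
SMTP_HOST = "mail.example.com"                          # placeholder SMTP relay
ALERT_TO = "sysadmins@example.com"                      # placeholder on-call alias

def is_up(host, port=22, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_alert(host):
    """Email the on-call alias about an unreachable host."""
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {host} is not responding"
    msg["From"] = "monitor@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(f"{host} failed its reachability check.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

for host in HOSTS:
    if not is_up(host):
        send_alert(host)

Run from cron every few minutes, this is already enough to learn about an outage before the first customer calls.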

CHAPTER II
WORKSTATION

Overview

This unit aims to give you an understanding of the operating system and how the OS functions in a computer system with other hardware. Unlike servers, desktops are typically deployed in large quantities, each with almost the same configuration. All computers have a life cycle that begins when the OS is first loaded and ends when the machine is powered off for the last time. As a consequence of entropy, during this time the software on the machine degrades and is modified, until it is reloaded from scratch and the cycle begins again. Ideally, all hosts begin with the same configuration and are updated in parallel.

Learning Objectives:

At the end of the unit, I am able to:

1. provide a description of the operating system and how the OS interacts with other computer hardware;
2. give a list of the tasks involved in the installation, configuration, and maintenance of the operating system; and
3. differentiate particular kinds of operating systems.

Setting up

Name : ____________________________________________ Date: ________________
Course/Year/Section: ________________________________________

Directions: Briefly respond to each of the following questions below.

1. What is DHCP?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

2. Explain the role of System Administrator.


____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

3. Can you name different types of operating system?


____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

4. What is DNS and which port does it use?


____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

5. What is the difference between a workstation and a server?


____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

Lesson Proper

Managing workstation operating systems leads to three simple tasks:

loading the system software and applications initially, upgrading the system software and applications, and setting network parameters. If you don't get these three things right, if they don't work consistently everywhere, or if you leave any of them out, everything else becomes more complicated. If you do not load the operating system onto hosts in a consistent way, you will find yourself with a support nightmare. If you can't update and patch software easily, you won't be able to keep systems current. If network settings are not handled by a centralized service such as a DHCP server, even the slightest adjustment to the network becomes difficult. Automating these tasks makes a world of difference.

We define a workstation to be computer hardware dedicated to the work of a single customer. Typically, this means a desktop or notebook computer, and in modern environments it can also include remotely accessed computers, virtual machines, and dockable laptops, among others. Remember that a machine and its operating system have a life span. In his paper "An Analysis of UNIX System Configuration" (Evard, 1997), Rémy Evard offered an outstanding treatment of this topic. Despite its focus on UNIX hosts, the model can be extrapolated to others. The model he created is shown in Figure 3.1.

[Figure 3.1: Evard's life cycle of a machine and its OS. The transitions build, initialize, update, entropy, debug, rebuild, and retire connect five states: New, Clean, Configured, Unknown, and Off.]


Five states are illustrated in the diagram: NEW, CLEAN, CONFIGURED, UNKNOWN, and OFF.

• NEW refers to a brand-new machine.
• CLEAN refers to a machine on which the operating system has been installed but no localizations have been performed.
• CONFIGURED means a properly configured, operational machine.
• UNKNOWN is a computer that has been misconfigured or has become outdated.
• OFF refers to a retired and powered-off machine.

Several transitions bring a machine from one state to another. The build and initialize processes are typically a single step at most sites; they result in the OS being loaded and brought into a usable state. Entropy is unwanted degradation that leaves the machine in an uncertain condition, which is resolved by a debug process. Updates, often in the form of patches and security fixes, happen over time. Sometimes it makes sense to wipe and reload a machine: because it is time for a major OS upgrade, because the machine must be re-created for a different purpose, or because sheer entropy has left reloading as the last option. The rebuild process then takes place, and the machine is wiped and reloaded to return it to the configured state. These processes repeat themselves as months and years go by. Finally, the computer becomes obsolete and is retired: it either suffers a horrible death or is deliberately decommissioned.

What do we learn from this diagram? First, it is necessary to recognize that there are multiple states and transitions. Second, we plan for them: we schedule deployments, anticipate the things that can break and require maintenance, and so on. We don't act as if every repair is a surprise; rather, if the volume warrants it, we set up a repair process or even a whole repair team. All of these things demand planning, personnel, and other resources. Third, we see that the machine is usable only in the configured state, while several other states exist. We want to maximize the amount of time spent in that state. All the other processes are associated with bringing the computer into, or returning it to, the configured state; those processes should therefore be swift, economical and, we hope, automated.

To maximize the time spent in the configured state, we need to ensure that the OS degrades as slowly as possible. The architecture decisions of the OS vendor have the biggest influence here. Some OSs allow new applications to be installed by scattering files into various system directories, making it impossible to tell which files belong to which package, and many OSs let add-ons be placed nearly anywhere. Microsoft's Windows series is known for its problems in this area. On the other hand, because UNIX imposes strict permissions on files, programs installed by an unprivileged user cannot violate the integrity of the OS.

Architectural decisions made by the SA can likewise strengthen or weaken the integrity of the OS. Is there a well-defined place, outside the system areas, for third-party applications to be installed? Have customers been given root or Administrator access, thereby increasing entropy? Has the SA developed a way for users to carry out certain administrative tasks without full root privileges? SAs must find a compromise between giving users optimal access and restricting them, and this balance affects the rate at which the operating system decays.

Manual installation is vulnerable to error. When mistakes are made during installation, the host begins life with a head start on the decay cycle. Unless the process is completely automated, the correct delivery of new workstations cannot be guaranteed.

Reinstallation, the rebuild process, is similar to installation, except that one may have to carry old data and applications forward. Decisions the SA made in the early stages determine how simple or difficult reinstallation will be. It is easiest to reinstall when no irreplaceable data is stored on the machine itself. This means storing as much data as possible on a network file server, so that reinstallation cannot inadvertently erase it.

Lastly, this model assumes that computers are ultimately retired. We shouldn't be surprised: computers don't last forever. Various activities are associated with replacing a computer. As with reinstallation, certain files and programs must be transferred to the new computer or archived for future reference; otherwise, they are lost in the sands of time.

Management also cares about the computer life cycle, for financial-planning reasons: the depreciation of an asset should match the asset's projected life cycle. Suppose hardware is depreciated on the company's 5-year schedule, but computers are expected to be retired after 3 years. You would then be unable to dispose of the retired machines for 2 years, which can be a big problem. The usual solution is to depreciate computer assets over a 3-year period instead. Once the computer life cycle, or a simplified, condensed model of it, is under control, it is also easier for SAs to secure funding for a dedicated deployment group, a maintenance team, and so on.
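
A simple illustration with made-up numbers: a PHP 60,000 workstation depreciated over five years still carries PHP 24,000 of book value when it is retired at year three and must stay on the books two more years; depreciated over three years, its book value reaches zero exactly when it is retired.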

Three main tasks apply to managing the operating systems of workstations:

• loading the system software and applications initially
• updating the system software and applications
• configuring network parameters

For any platform that is commonly used at your site, these three tasks should be automated if the site is to be managed cost-effectively. Several other tasks help to get these things done properly.
Several other tasks help to get these things done properly.

If a site has only a few hosts of a given platform, building robust automation for it is difficult to justify. But when the site grows later, you may wish you had invested in robust automation sooner. It is crucial to know when you are getting near that point, whether through observation, business-plan growth targets, or tracking customer demand.

Each vendor has a specific name for its automated OS-loading system: Solaris has JumpStart; Red Hat Linux has KickStart; SGI IRIX has RoboInst; HP-UX has Ignite-UX; and Microsoft Windows has Remote Installation Services. Automation addresses a wide variety of problems, not all of them technical. First of all, it saves money. The time gained by replacing a manual process with an automated one is clearly a major advantage. Automation also does away with two hidden costs. The first relates to errors: manual processes are prone to human error. A workstation has thousands of potential settings, often in a single program, and a small misconfiguration can cause a significant failure. Sometimes this problem is easy to address: when someone uses a problem application immediately after the workstation is delivered and reports it right away, the SA can quickly infer that the system has a configuration problem. Often, however, these problems lurk unnoticed for months or years until the customer finally uses the application in question.

How is the SA supposed to know, at that point, that the customer is using the program for the first time? The SA ends up spending a lot of time looking for a problem that would not have existed had the installation been automated. Why do you think "reloading the application" solves so many customer-support problems?

The second hidden cost relates to non-uniformity: if you load the operating system manually, you will never get the same configuration on all your computers. When we manually loaded applications on PCs, we noticed that no amount of SA training would result in every machine having every application configured in precisely the same way. Sometimes the technician missed one or two settings; sometimes a different option seemed easier. As a consequence, customers frequently found that their new workstations were not configured correctly, or that moving from one workstation to another did not give them the very same configuration, so applications crashed. Automation solves this problem.

It takes a long time to set up an automated deployment system. In the end, though, the effort will pay off, saving more time than was originally invested. Remember this when the project drags on and you're frustrated. Also remember that if you set up an automated system, you must do it properly; otherwise, it can cause you twice as much trouble later.

The most critical aspect of the automation is that it must be completely automated. That sounds obvious, but implementing it can be a different story. We believe it is worth the extra work not to have to return to the machine again and again to answer another prompt or to start the next phase. Full automation ensures that prompts are not answered incorrectly and that steps are not missed or skipped. It also improves the SA's time management: the SA can stay focused on the next task rather than trying to remember to head back to the machine to continue the process.

The best installation systems perform all the tasks requiring human input at the very beginning and then run unattended until completion. Many systems require zero input because, based on the Ethernet media access control (MAC) address of the machine, the automation "knows" what to do. The technician can walk away from the unit, confident that it will finish the process on its own. A procedure that requires someone to return halfway through to answer a question or two is not completely automated and loses much of the benefit. For example, if the SA starts the installation and then goes to lunch or a meeting, the computer will sit there doing nothing until the SA returns. If the SA is out of the office and is the only person who can take over something half done, those who need the machine have to wait. Or somebody else will attempt to finish the deployment, producing a host that needs debugging later. Have someone unfamiliar with your work try to run the process once you believe you've finished automating it.

A partly automated setup is better than no automation at all. To perfect an installation system, one has to build stop-gap measures along the way. The last 1 percent can take more time to automate than the first 99 percent. The lack of full automation may be justified when there are only a few hosts of a given platform and the expense of complete automation exceeds the time it would save, or when the vendor has done the environment a disservice by making the method impossible (or unsupported) to automate.

The easiest stop-gap measure is to provide a well-documented procedure, so that the installation is done the same way every time. The documentation should be written as notes taken while performing the process the first time, so that the various prompts are answered identically on every subsequent run.

One can also automate components of the installation. Some parts of the process are especially well suited to automation. For example, the initialize step in Figure 3.1 configures the OS for the local environment after the vendor's defaults have been installed; it normally includes loading files, setting passwords, and rebooting the machine. A script that copies a series of defined files to their appropriate positions can be a lifesaver. Alternatively, a tar or zip file of the files that changed during initialization can be created and pushed to new machines using the vendor installation process. One can be much more imaginative than this in devising stop-gap measures.
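
A minimal sketch of such an initialize script in Python; the manifest of files and destinations is hypothetical and would be replaced by your site's own list:

import shutil
from pathlib import Path

# Hypothetical manifest: (file shipped with the script, destination on the new host)
MANIFEST = [
    ("files/resolv.conf", "/etc/resolv.conf"),
    ("files/ntp.conf", "/etc/ntp.conf"),
    ("files/motd", "/etc/motd"),
]

for src, dst in MANIFEST:
    dest = Path(dst)
    dest.parent.mkdir(parents=True, exist_ok=True)  # create the target directory if missing
    shutil.copy2(src, dest)                         # copy the file, preserving metadata
    print(f"installed {src} -> {dst}")

Because the script is also documentation of what was copied where, rerunning it is a cheap way to push a machine back toward the configured state.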

Many sites use cloned hard disks to build new computers. Cloning means setting up one host with the exact software configuration required on all the machines to be deployed, then copying its hard disk onto every new computer as it is installed. The original computer is commonly referred to as the golden host. Rather than copying the hard disk each time, its contents are typically transferred to a CD-ROM, a tape, or a network file system, which is then used for deployment. A small industry is dedicated to supporting this process, with tools that help manage cloning across diverse hardware and applications. We prefer to automate the loading process rather than copy the contents of the disk, for many reasons.

For one thing, if the architecture of a new computer differs substantially from that of the old machine, you have to build a separate master image. It doesn't take much imagination to see how you can end up with a lot of master images. To make things worse, any single change must be added to every master image. Ultimately, each replacement computer with any sort of new hardware requires a fresh image, adding significant time and cost.

Many OS vendors do not support cloned disks, since their installation process makes decisions at load time, based on considerations such as what hardware is present. Windows NT generates a unique security identifier (SID) for each machine during installation. Early Windows NT cloning software could not replicate this step, which caused many problems; eventually the problem was solved.

In practice, many sites strike a balance by using both automation and cloning. Some sites clone disks to establish a basic OS install and then use an automated package-distribution system to layer all the applications and patches on top. Other sites use a standard vendor OS installation and then "post-load" site-specific applications and configuration onto the machine.

Last but not least, some OS vendors do not offer installation-automation methods, but home-grown options exist. SunOS 4.x had nothing like Solaris' JumpStart, so some sites loaded the OS from a CD-ROM and then ran a script that completed the process. The CD-ROM brought the machine to a known state, and the script performed the remainder of the configuration.

Computers normally arrive preloaded with the OS. Knowing this, you may think that you don't have to bother reloading an operating system that someone has already loaded for you. Honestly, we believe that reloading the operating system will make your life simpler in the long run.

It is better to rebuild the software from scratch, for a variety of reasons. First, you would otherwise have to contend with installing your other programs and settings on top of the vendor-loaded OS before the system meets your platform standards; automating the whole load from scratch is often simpler than layering programs and settings over the manufacturer's OS installation. Second, vendors may change their preloaded OS configuration for their own reasons, without notice to anyone; loading from scratch gives you a known state on every machine. Using the preloaded OS means deviating from your standard configuration, and inevitably these deviations lead to problems.

Another justification for not using a preloaded operating system is that hosts will inevitably need an OS reload anyway. For example, a hard disk may fail and be replaced with a blank one, or you may have a policy of reloading the OS of a workstation as it moves from one customer to another. If some computers run preloaded OSs while others run your locally built OS, you end up having to support two configurations.

There will be many such variations. You don't want to find, smack in the middle of an emergency, that you can't load and configure a host without the vendor's support. The anecdote above describes an OS dating back a long time, yet history does repeat itself. PC vendors preload the operating system along with numerous applications, add-ons, and drivers. The vendor-supplied OS reload disks may or may not include those add-ons. Often the applications won't be missed: they are free programs, worth what you paid for them. But there can also be critical device drivers among them. This is especially relevant for laptops, which often require drivers that do not come with the standard version of the OS.

This issue has become less severe over time, as custom laptop-specific hardware has given way to common generic components. Microsoft has responded to the need by making its operating systems less dependent on the hardware on which they are installed. Although the situation has improved from the customer's point of view, manufacturers still differentiate themselves by bundling specialized software with particular models, and this defeats attempts to build a common image that will work on all platforms.

Many vendors will preload a disk image that you supply. Not only does this save you from having to load the systems manually, it also means you know exactly what is loaded. Nevertheless, as hardware and models change, you have the duty to keep the master image up to date.

Whether the OS deployment is completely manual or fully automated, you can improve consistency by using a written checklist to make sure technicians don't skip any steps. When the installation is completely manual, the value of such a checklist is obvious. But even a solo system administrator who thinks "all my OS loads are consistent because I do them all myself" will find benefits in using a written checklist. If nothing else, your checklists can serve as the basis for training a new system administrator, or can free up your time by letting a trusted clerk follow them.

Even when the OS load is completely automated, a good checklist is still helpful. Some activities cannot be automated because they are physical actions: starting the process, checking that the machine works, cleaning the system before it is delivered, or giving the customer a choice of mouse pads. Other related tasks can be included in the checklist: updating inventory lists, reordering network cables if the stock falls below a certain level, and following up with the customer a week later to check for any issues or concerns.

Wouldn't it be nice if the SA's job were finished once the OS and applications had been loaded? Sadly, as time goes by, people keep discovering new bugs and new security holes, all of which need to be fixed. There are also exciting new technologies that need to be deployed. All of these activities are software updates. Someone has to take care of them, and that someone is you. But don't worry; you don't have to spend all your time doing updates. Like installation, updates can be automated, saving time and effort.

“Every vendor has a different name for its software-update automation system: Solaris, AutoPatch; Microsoft Windows, SMS; and various people have written layers on top of Red Hat Linux RPMs, SGI IRIX's RoboInst, and HP-UX's Software Distributor (SD-UX). Other systems are multi-platform solutions” (Ressman and Valde 2000).

A software-update system should be general enough to be able to deploy new applications, upgrade applications, and patch the operating system. If a system can only distribute patches, new applications can be packaged as though they were patches. Such systems can also be used to make the small changes that certain hosts need. A minor configuration update, such as a new /etc/ntp.conf, can be packaged into a patch and deployed automatically.

Some systems can run post-install scripts, programs that make any final modifications needed to complete the installation of the package. One can even build a package that contains only a post-install script, as a way to deploy a complicated change.
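
A sketch of the /etc/ntp.conf idea as a stand-alone "package" whose payload is only a post-install step; the server names are placeholders:

from pathlib import Path

# Hypothetical payload: the configuration this one-file "patch" delivers.
NTP_CONF = """\
server ntp1.example.com iburst
server ntp2.example.com iburst
driftfile /var/lib/ntp/drift
"""

def post_install():
    """Write the new /etc/ntp.conf, keeping a backup of the old one."""
    target = Path("/etc/ntp.conf")
    if target.exists():
        target.rename("/etc/ntp.conf.bak")  # keep a rollback copy
    target.write_text(NTP_CONF)

if __name__ == "__main__":
    post_install()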

Automating software updates is similar to automating the initial installation, but it is also different in several important ways:

• The host is in a usable state. Updates are made to machines in good working condition, whereas the initial-load process has extra work to do, such as partitioning disks and deducing network parameters. In fact, initial loading must work on a host in a disabled state, such as one with a completely blank hard drive.

• The host is in an office. Updates happen where the host already sits, on the host's local network, and must not overload that network or disturb its other users. The initial-load process can be carried out in a laboratory where special resources are available. Large sites, for example, may have a dedicated installation area with a high-capacity network, where machines are prepared before delivery to the new owner's office.

• No physical access is required. Updates should not require a physical visit, which disrupts customers and is difficult to schedule. Missed appointments, customers on vacation, and machines in locked offices all contribute to a rescheduling nightmare. Physical visits cannot be automated.

• The host is already in service. Updates involve a computer that has been in operation for some time, so the customer expects that it will be usable when the update is done. The machine cannot be left in a broken state! By contrast, if the initial OS load fails, you can simply wipe the disk and start from scratch.

• The host may not be in a "known state." Update automation must therefore be more cautious, because the system may have decayed since its initial deployment. During the initial load, the state of the machine is much more controlled.

• The host may have users logged on. Some updates cannot be installed while a computer is in use. Microsoft's Systems Management Server (SMS) solves this problem by installing packages after the customer has entered a username and password to log in but before the session starts. The AutoPatch software used by Bell Laboratories emails the customer two days in advance and allows the customer to postpone the update for a couple of days by creating a file with a specific name in /tmp.

• The host may have vanished. In this era of laptops, it is quite likely that a host will not always be on the network when the update system runs. Update systems can no longer assume that hosts are reachable; they must either chase after hosts until they reappear, or run on a schedule on the host itself, which then discovers that it has reconnected to its home network.

• The host may be a dual-boot machine. In this era of dual-boot hosts, update systems that reach out to desktops must be careful to verify that they have reached the expected OS. A dual-boot PC running Windows on one partition and Linux on the other may run Linux for months without the Windows partition receiving any changes. Both the Linux and the Windows update systems need to be smart enough to handle this situation.

The consequences of a failed patch are different from those of a failed OS load. The customer will probably never know that an OS load failed, since the host usually has not been delivered yet. A host being patched, however, is usually already at the person's desk; a patch that fails and leaves the computer in an unusable state is far more noticeable and irritating.

The one, some, many technique reduces the possibility of a failed patch:

• One. First, patch one computer. This computer may be your own, so there is an incentive to get it right. If the patch fails, refine the process until it works reliably on a single computer.

• Some. Next, try the patch on a few other computers. If possible, test the automated patching process on your fellow SAs' workstations before inflicting it on customers; SAs are a little more understanding. Then try a handful of friendly customers outside the SA group.
• Many. As you test the system and gain confidence that it won't destroy anyone's hard disk, you move gradually to larger and larger groups of increasingly risk-averse customers.

An automated update system has the potential for massive damage. To ensure that problems are caught early, you must have a well-defined process. The process needs to be well defined so that it is repeatable, and with each use you should continue to improve it. If you follow the process, you will avoid disasters. Every time you distribute something, you take a risk; take no unnecessary risks.

An automated patch system is like a clinical trial of a new anti-influenza drug. You wouldn't give an untested drug to thousands of people before testing it on small groups of informed volunteers; similarly, you shouldn't unleash an automated patch system until you're confident it won't do serious harm. Think of how grumpy people would be if your patch destroyed their computers, when they had not yet even noticed the problem that the patch was supposed to solve! Below are a few suggestions for the initial rollout.

• Create a well-defined release that will be distributed to all hosts, and nominate it for deployment. The nomination starts a buy-in process to get the release approved by all stakeholders. This practice prevents overly enthusiastic SAs from distributing trivial, non-business-critical software packages.

• Institute a communication plan so that those affected are not surprised by the changes. Carry out the plan the same way every time, because customers find comfort in consistency.

• Define and use success metrics as you move through the one, some, many phases: for example, if no failures occur, each successive group is about 50 percent larger than the previous one; if a single failure occurs, the group size drops back to a single host and begins growing again (see the sketch after this list).

• Finally, provide a way for customers to halt the distribution process should things go terribly wrong. The process documentation should state who has the authority to order a halt, how to request one, who has the authority to approve the request, and what happens next.
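
A sketch of the group-sizing rule from the metrics guideline above, under the stated assumptions (roughly 50 percent growth on success, reset to one host on failure):

def next_group_size(current_size: int, had_failure: bool) -> int:
    """Compute the next rollout group size under the one, some, many policy."""
    if had_failure:
        return 1  # fall back to a single host and re-earn trust
    return max(current_size + 1, int(current_size * 1.5))  # grow by about 50 percent

# Example: six successful waves starting from one host.
size = 1
for wave in range(6):
    print(f"wave {wave}: {size} hosts")
    size = next_group_size(size, had_failure=False)

This prints group sizes 1, 2, 3, 4, 6, 9: small at first, while trust is low, then growing quickly once the patch has proven itself.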

3. Network Configuration

The third component you need in order to manage a large number of workstations is an automated way to set network parameters, the small pieces of configuration information often associated with booting a machine and bringing it onto the network. This information is highly customized to the subnet, or even to the particular host. That is in contrast with the software-delivery systems above, in which all hosts on the same platform receive the same code. As a consequence, the automation for network parameters is typically separate from the other systems.

The most common system for automating this is DHCP. Some vendors' DHCP servers can be set up in seconds; others take much longer. Building a global DNS/DHCP architecture spanning tens or hundreds of sites requires a lot of work and technical expertise. Some DHCP vendors have professional-services organizations that can assist you through the process, which can be of special value to a global enterprise.

A small business may not see the value of investing a day or more learning software that, when you set up a computer, seems to save only a minute or two of work. Manually entering an IP address is not a big deal, nor are a netmask and a few other parameters. Right?

Wrong. Sure, not setting up a DHCP server saves you a day or two now. But here's the problem: remember the hidden costs we described at the beginning of this chapter? Sooner or later, if you don't use DHCP, they will rear their ugly heads. Eventually you will have to renumber the IP subnet, or change the subnet netmask, the Domain Name Service (DNS) domain, or some other network parameter.

Without DHCP, you will waste weeks or months on a single such change, because you will have to coordinate teams of people to visit every host on the network. The small investment in DHCP makes all such future changes practically free.

The benefit makes the effort worthwhile. We have seen DHCP used at its best and at its worst; the next sections describe what we have observed.

DHCP systems should provide a templating facility. Some DHCP systems store the individual parameters that are handed out to each host; better ones store templates that define which parameters are provided to particular groups of hosts. The advantage of templates is that when you want to make the same change for many hosts, you only have to change the template, which is far easier than walking through a long list of hosts trying to figure out which ones need to be altered.

Another benefit is that a syntax error is much less likely when a program generates the configuration file: if the templates are syntactically correct, the generated configuration will be correct, too. Such a DHCP system does not need to be fancy. Many SAs write small programs to create their own template systems. The host list is kept in a database, or even in a plain text file, and the program uses this data to generate the DHCP server's configuration automatically. Rather than recording each host's details in yet another system or building a complex spreadsheet, you can add the information to an inventory database or file you already maintain. For example, UNIX sites can simply embed it in the /etc/hosts file that is already being managed; the program that automatically generates the DHCP configuration then reads the tokens from that file. The following sample lines are from such a file:
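
(The lines below are a hypothetical reconstruction for illustration; the token after the # comment names the template the generator should apply.)

192.168.10.21   adagio     # DHCP=sun
192.168.10.22   talpc      # DHCP=wnt
192.168.10.23   sec4       # DHCP=hplj4
192.168.10.24   ostenato   # DHCP=ncd(barney)
192.168.10.25   tomlt      # DHCP=tomslaptop
192.168.20.25   tomlt      # DHCP=tomslaptop
192.168.30.25   tomlt      # DHCP=tomslaptop
192.168.40.25   tomlt      # DHCP=tomslaptop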

Legacy programs that read this file simply ignore the # DHCP= token as a comment. The DHCP configuration generator, however, uses the tokens to decide what to generate for each device. Hosts adagio, talpc, and sec4 receive the appropriate settings for a Sun workstation, a Windows NT host, and an HP LaserJet 4 printer, respectively. Host ostenato is an NCD X-Terminal that boots via the Trivial File Transfer Protocol (TFTP) from a host named barney. Because the token includes the name of the TFTP server, the template is generic enough to be used for all hosts that boot this way.

The last four lines say that Tom's laptop should receive a different IP address depending on which of four subnets it is connected to: his office, his home, or the fourth- or fifth-floor laboratory. Note that even though we use static assignments, a host can still move between networks. By embedding this information in the /etc/hosts file, we reduce the potential for typos.

If the information were kept in a separate file, the data could become inconsistent. Kept this way, the file can also carry other parameters. One site stored these details, along with other tokens that indicated JumpStart and other parameters, in the comments of its UNIX /etc/hosts file. A script extracted this information and generated the JumpStart configuration data, the DHCP configuration files, and other settings. An SA could accomplish an amazing amount of work by editing a single file! The open source HostDB project expands on this idea: you edit a single file, and it generates and distributes the DHCP and DNS configuration data to the appropriate servers.
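
A minimal sketch of such a generator in Python. It parses lines in the hypothetical format shown above and emits host entries in ISC dhcpd syntax; the MAC-address lookup is a placeholder, since a real version would keep MAC addresses in the same inventory file:

import re

# Hypothetical input line format: "IP  hostname  # DHCP=template"
LINE = re.compile(r"^(\S+)\s+(\S+)\s+#\s*DHCP=(\S+)")

def mac_for(host: str) -> str:
    """Placeholder: look up the host's MAC address in your inventory."""
    return "00:00:00:00:00:00"

def generate(hosts_file: str) -> str:
    """Emit an ISC dhcpd host block for every tagged line in the file."""
    blocks = []
    with open(hosts_file) as f:
        for line in f:
            m = LINE.match(line.strip())
            if not m:
                continue  # untagged lines are ignored, just as legacy programs ignore the tags
            ip, name, template = m.groups()
            blocks.append(
                f"host {name} {{\n"
                f"  # template: {template}\n"
                f"  hardware ethernet {mac_for(name)};\n"
                f"  fixed-address {ip};\n"
                f"}}\n"
            )
    return "\n".join(blocks)

if __name__ == "__main__":
    print(generate("/etc/hosts"))

Regenerating and reloading the DHCP server from this single file is what makes the one-file-edit workflow described above possible.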

Typically, DHCP assigns a particular host a fixed IP address. DHCP's dynamic leases feature instead lets you designate a pool of IP addresses to be handed out to hosts, which may then receive a different IP address each time they connect to the network. The benefit is that this means less work for the network administrators and more convenience for the customers.

Because this feature is so commonly used, many people assume that it is the only way DHCP can assign addresses. It is not. Locking a particular host to a fixed IP address is often better; this is particularly true for servers whose IP address appears in configuration files, such as DNS servers and firewalls. The RFCs call this technique static assignment; Microsoft DHCP servers call it permanent leases.

The best time to use a dynamic pool is when many hosts are sharing a few IP addresses. For example, you may have a remote access server (RAS) with 200 modems serving thousands of hosts that might dial in; it would be reasonable to provide a dynamic pool of 220 addresses in that case. Another candidate is a network with heavy turnover of transient guests, such as a laboratory test bed, a machine assembly room, or a guest laptop network. In such places, the physical space or the number of ports limits how many machines can connect at once, and the pool needs to be only slightly larger than that limit.

Dynamically assigned leases might seem best suited to typical office LANs. Nonetheless, there are drawbacks to using dynamic leases for fixed equipment. For one thing, without a guarantee that each machine always keeps the same IP address, the pool can be drained, and newly arriving computers cannot obtain addresses at all. Imagine the pool being emptied as everyone arrives at work in the morning, and the boss unable to get anything done because no IP address is left for her laptop.

Another reason to allocate IP addresses statically is that it makes logs more useful. If the same IP address is always allocated to the same workstation, log entries for a given address can be traced back to a single machine over time. Some software packages also behave badly when a host's IP address keeps changing, although such packages are the exception. Sparing use of static assignment avoids these problems.

Using only dynamically assigned IP addresses is not a security fortification, either. Several sites disable dynamic assignment in the belief that it stops uninvited guests from getting onto their network. The truth is that anyone can manually configure network settings, and software that lets one snoop network packets reveals enough information to guess which IP addresses are unused, what the netmask is, what the DNS settings are, the default gateway, and so on.

IEEE 802.1x is a more reasonable way to achieve that goal. This network access control framework determines whether a newly attached host is permitted on a network port. Network access control has been used principally on Wi-Fi networks and, increasingly, on wired networks. An Ethernet switch that supports 802.1x keeps a newly attached host off the network until the host performs some form of authentication; depending on whether the authentication succeeds or fails, traffic is authorized or the host is refused entry to the network.

Before 802.1x was invented, a number of people developed similar ideas. In a hotel or public space where the network was engineered to make it easy to get online, you had access only to an authorization web page until you authenticated: you received access either by presenting valid credentials or by paying with a credit card. In these settings, SAs want the address pool to be simple plug-in-and-go while still being able to verify that users are authorized to use the company, university, or hotel services. See Gesture (1999) and Valian and Watson (1999) for more detail on early approaches and strategies. Their systems allow unregistered hosts to be identified, along with who should take responsibility for any damage caused by such unauthorized hosts.

Some DHCP systems can also update dynamic DNS servers. This glamorous feature introduces needless complexity and security risks. With it, the client tells the DHCP server what its hostname should be, and the DHCP server sends updates to the DNS server. (The client host may also send updates to the DNS server directly.) No matter which network the machine connects to, its hostname stays the same in DNS. Hosts with static leases always have the same name in DNS anyway, because they are always given the same IP address.

With dynamic leases, the host's IP address comes from a pool of addresses, each of which generally has a fixed, conventional name in DNS, such as dhcp-pool-10, dhcp-pool-11, dhcp-pool-12. Whichever host receives the tenth address in the pool has the name dhcp-pool-10 in DNS, which may be inconsistent with the hostname configured locally on the host.

The inconsistency is harmless, provided the machine is a client. That is, if a host does not actually run any services, nobody needs to refer to it by name, and it does not matter what name is assigned to it in DNS. If the host does run services, it should be given a permanent DHCP lease and should always keep the same name. Applications designed to connect to client machines do not use DNS to identify them. One example is peer-to-peer platforms that allow hosts to exchange files or to connect through voice or video: upon joining the peer-to-peer system, each host registers its current IP address with a central registry that has a fixed name and/or IP address.

This methodology is employed by the collaboration code, like


Windows Net-meeting. Lease a bunch choose its own host-name may be
a safety risk. Host-name is managed by a centralized authority and
not by the domain customer. What happens if a bunch that has
identical name as a vital server is configured? That one would
assume the particular server is that the DNS / DHCP system?

Most dynamic DNS / DHCP systems enable you to lock vital


server names, which suggest you wish to manage and examine the
vital server list as a replacement name-space. If you miss a
replacement server unknowingly, you'd have a catastrophe waiting
to visualize it occur. Prohibits cases within which shoppers are
during a scenario to disturb others by creating their obvious
mistakes.
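
The name-locking safeguard reduces to a simple membership check. A
minimal Python sketch, with a hypothetical reserved-name list; a
real dynamic DNS/DHCP system would enforce this internally.

    # Critical server names that dynamic clients must never claim.
    RESERVED = {"mailserver", "dns1", "payroll", "fileserver"}

    def approve_hostname(requested: str) -> bool:
        """Reject any client-requested name that collides with a locked one."""
        return requested.lower() not in RESERVED

    for name in ("laptop-042", "mailserver"):
        print(name, "->", "granted" if approve_hostname(name) else "refused")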

LAN architects learned this long ago with respect to letting
clients configure their own IP addresses. We should not repeat the
mistake by authorizing clients to pick their own host-names. Before
DHCP, customers regularly took down a LAN by incorrectly setting
their host's IP address to the router's address. Customers were
handed a list of IP addresses to use when setting up their unit:
"Was the first one the default gateway, or was it the second? I
have a 50/50 chance of getting it right." When the customer chose
incorrectly, communication with the router was effectively halted.

Other requests follow more structured procedures. A particular
community of developers, for example, may require a specific
collection of tools. For each software release, a tool set is
identified, reviewed, approved, and deployed. The SAs take part in
this phase in order to fit the staff into the deployment program.

Having a number of standard configurations can be a positive idea
or a disaster, and the SA is the one who decides which way it goes.
The more the configurations share in common, the easier it is to
manage them all. One approach that lets a large variety of
configurations fit together seamlessly is to make sure that every
configuration is built from the same platform and frameworks,
instead of one application per model. If you invest the time in a
simple, streamlined system that can create and automate several
configurations, you will have created something that is a pleasure
to maintain. Such a disciplined framework of controlled, structured
settings is also known as Software Configuration Management (SCM).
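
To make the idea of one framework driving many named configurations
concrete, here is a hedged Python sketch; the configuration names
and package lists are invented, and a real SCM tool would install
packages rather than print.

    # Each named configuration is data, not a separate program.
    CONFIGS = {
        "developer-desktop": {"packages": ["gcc", "git", "editor"]},
        "web-server":        {"packages": ["httpd"]},
        "cgi-server":        {"packages": ["httpd", "python3"]},
    }

    def apply(config_name: str, installed: set) -> set:
        """Idempotently converge a host onto a named configuration."""
        wanted = set(CONFIGS[config_name]["packages"])
        for pkg in sorted(wanted - installed):
            print(f"installing {pkg}")  # a real tool would install here
        return installed | wanted

    state = apply("cgi-server", {"httpd"})  # installs python3
    state = apply("cgi-server", state)      # re-run: no further changes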

This method applies to both servers and desktops. We discuss
servers in the next unit. It should be noted here that distinct
configurations may be created for server installations. Although
servers run very particular applications, they still share some
kind of base installation that can be defined as one of these
custom setups. If redundant servers are rolled out to add capacity,
a fully automated deployment is a huge win. For instance, several
websites have dedicated web servers that serve static pages,
(dynamic) Common Gateway Interface (CGI) pages, or other resources.

If these different configurations are created via an automated
process, it's a simple matter to roll out additional capacity in
any region. Simple, standard setups also relieve some of the
pressure of software upgrades. If you can completely wipe a disk
and re-install, OS upgrades become trivial, although more attention
is needed in such areas as isolating user data and managing
host-specific configuration data.

Name: _________________________________________________________ Date: ________________


Course/Year/Section: ________________________________________

1. Why do we need an operating system?


________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

2. What are the three main components of most operating systems?


________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

3. A System Administrator is the person responsible for managing a
multi-user computing environment, including a Local Area Network
(LAN). What are his/her typical duties?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

4. Find five different operating systems on the Internet and
provide a brief overview of each.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

5. Examine your own site, or a site you have recently visited, and
identify at least three places where specific improvements have not
been made. For each of them, state why the expenditure was not
made.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________

CHAPTER III

SERVER

This chapter deals with servers. In contrast with a workstation,
which is dedicated to a single client, a server is relied on by
multiple clients, so reliability and uptime are a high priority.
When we analyze how to make a server reliable, we look for features
that shorten maintenance time, provide a well-conditioned operating
environment, and apply extra care during configuration.

A vast number of clients rely on a server, so any investment that
improves its performance or reliability is amortized over many
clients. Servers are also likely to stay in service longer than
workstations, which justifies the additional cost: buying a machine
with surplus capacity is a chance to extend its life cycle.

Hardware sold for server use differs qualitatively from equipment
sold for use as a single workstation. Server hardware has different
features and is designed for a different business model. Servers
are installed and maintained using special procedures. They
typically carry repair arrangements, data-recovery services, server
software, and improved network control, and they operate in the
managed environment of a data center, where access to the machines
can be restricted. Understanding these differences will help you
make smarter buying decisions.

The systems that are marketed as servers differ from those that are
sold as clients or desktop workstations. It is tempting to "save
money" by purchasing desktop hardware and filling it with server
software. This can work in the short run, but it is not the best
solution for the long term, and in a major project you will be
building a house of cards. Server hardware is typically more costly
but has extra features that justify the cost. These features
include:

• Extensibility. Servers usually have either more physical space
  inside for hard drives and more slots for cards and CPUs, or
  high-throughput interfaces that allow various peripherals to be
  attached. Vendors typically offer advanced hardware/software
  configurations that support clustering, load balancing, automatic
  fail-over, and similar capabilities.

• More CPU performance. Servers often have multiple CPUs and
  advanced hardware features such as pre-fetching, multi-stage
  processing pipelines, and the ability to distribute load among
  CPUs automatically. CPUs may be available at several speeds, each
  priced nonlinearly with respect to speed. The latest revision of
  a CPU tends to be disproportionately costly: an extra fee for the
  cutting edge. Such an extra expense can be more readily justified
  on a machine that serves many clients. Since a server is expected
  to last longer, paying for a faster CPU that won't become
  outdated as quickly is reasonable. Remember that on a server, CPU
  speed does not always dictate performance, as many applications
  are I/O-bound, not CPU-bound.

• High-performance I/O. Servers typically do more I/O than clients
  do. Often the volume of I/O is proportional to the number of
  clients, which requires a faster I/O subsystem. This can mean
  SCSI (Small Computer System Interface) or Fibre Channel-
  Arbitrated Loop drives rather than IDE (Integrated Drive
  Electronics) drives, faster internal buses, or network interfaces
  that are orders of magnitude faster than the clients'.

• Upgrade options. Servers are often upgraded, rather than simply
  replaced; they are built to grow. Servers can usually take
  additional CPUs or have individual CPUs replaced with faster
  ones, without other hardware changes. Server CPUs typically
  reside on individual cards within the chassis, or sit in
  replaceable sockets on the system board.

• Rack mountable. Servers should be rack-mountable. In Chapter 6 we
  address the importance of rack-mounting servers rather than
  stacking them. Although non-rackable servers can be placed in
  racks, doing so wastes space and is inefficient. While desktop
  hardware may have a sleek, molded plastic shell, a server should
  be rectangular and designed for optimal use of space within a
  rack. Since the host will be mounted in a rack, any covers that
  must come off to make repairs should be removable while the host
  is in place. More importantly, a server in a rack-mounted
  environment should be designed for proper cooling and
  ventilation. A machine that has only side cooling vents does not
  keep its temperature in a rack as well as one that moves air from
  front to back. It is not enough for the word server to appear in
  a product's name; care should be taken to ensure that it fits in
  the space allocated for it. Connectors should suit a rack-mount
  arrangement, for example, standard Cat-5 cable for the serial
  console instead of a screw-fastened DB-9 connector.

• No side-access needs. It is easier to repair or perform
  maintenance on a rack-mounted host if the work can be done while
  it is still in the rack. These operations should be possible
  without access to the machine's sides. All cables should be on
  the back, and all drive bays should be on the front. We have seen
  CD-ROM bays that opened sideways, showing that the host wasn't
  designed with racks in mind. Many devices, particularly network
  equipment, allow all access through one face. This means the unit
  can be placed "butt-in" in a crowded space and still be usable.
  Some hosts require removal of the external plastic case (or parts
  of it) to fit properly into a standard rack; verify that this
  will not interfere with operation or ventilation. Power switches
  should be accessible, but they should not be easy to bump
  accidentally.

• High-availability options. Most servers offer various
  high-availability options, such as dual power supplies, RAID
  (Redundant Array of Independent Disks), multiple network
  connections, and hot-swap components.

• Maintenance contracts. Vendors offer maintenance contracts for
  server hardware that usually guarantee fixed response times for
  replacement components.

• Management options. Ideally, servers provide some remote
  management capability, such as serial port access, which can be
  used to diagnose and repair problems and to bring a downed system
  back into service. Most servers also come with internal
  temperature sensors and other hardware monitoring that can raise
  alerts when faults are detected.

Vendors continually evolve their server architectures to meet
market needs. In particular, market demand has pushed vendors to
improve servers so that more systems can be packed into colocation
centers -- leased data centers that charge by the square foot.

It is important to choose vendors known for reliable products. Some
manufacturers cut corners by using consumer-grade components;
others use parts that meet MIL-SPEC requirements. Some vendors have
years of experience designing servers. The more seasoned vendors
offer the features listed above, as well as other small refinements
that can come only from years of experience. Vendors with little or
no server experience may offer no maintenance support beyond the
replacement of hosts that are dead on arrival. Speaking with other
SAs can be useful for finding out which suppliers they use and
which ones they avoid. The System Administrators' Guild (SAGE) and
the League of Professional System Administrators (LOPSA) are
excellent resources for the SA community. Environments can be
homogeneous -- all the same vendor or product line -- or
heterogeneous, with many different vendors and/or product lines.
Homogeneous environments are easier to maintain, because training
is reduced, maintenance and repairs are easier -- one set of spare
parts -- and there is less finger-pointing when problems occur.

To appreciate the additional cost of servers, you need to know how
machines are priced. You also need to consider how a machine's
intended application adds to its cost. Many vendors have three
product lines: home, business, and server. The home line typically
has the lowest initial purchase price, because consumers tend to
make buying decisions based on the advertised price. Add-ons are
cheap, and future expandability is limited. Components are usually
specified generically -- a video resolution, say, rather than a
particular video card manufacturer and model -- since achieving the
lowest possible sales price requires vendors to change parts
suppliers on a monthly or even weekly basis. These computers tend
to have more gaming features, such as joysticks, high-resolution
graphics, and sophisticated audio.

The business desktop line tends to concentrate on total cost of
ownership. The initial purchase price is higher than for a home
computer, but the business line should take longer to become
outdated. It is expensive for companies to keep large pools of
spare parts, not to mention the cost of training repair technicians
on each model. Consequently, the business line changes products
infrequently. This predictability simplifies the roll-out of new
hardware designs and the management of spare-parts inventory.
Rather than being purchased outright, business-class equipment is
often leased, and these assurances are of tremendous value to a
company.

The server line tends to concentrate on providing the lowest cost
per unit of performance. For example, a file server may be designed
with a focus on maximizing its SPEC SFS97 benchmark result per
dollar of purchase price. Typical metrics involve web traffic,
online transaction processing, multi-CPU aggregate performance, and
so on. Many of the server features mentioned earlier add to a
machine's purchase price but also improve the machine's potential
uptime, yielding a more attractive price/performance ratio.
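
The price/performance argument is simple arithmetic. A Python
sketch with entirely invented prices and benchmark figures:

    # Hypothetical machines: (purchase price in dollars, benchmark ops/sec).
    machines = {
        "home-line PC":    (2_000,     400),
        "server-line box": (50_000, 25_000),
    }

    for name, (price, ops) in machines.items():
        # The lower dollars-per-op wins, whatever the sticker price.
        print(f"{name}: ${price / ops:.2f} per op/sec")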

Servers cost more for other reasons, too. A more serviceable
chassis is more costly to manufacture. Putting all drive bays and
access panels on the front and back, rather than wherever is
cheapest, adds engineering cost. However, the marginal increase in
initial purchase price buys a shorter mean time to repair (MTTR)
and greater operational convenience in the long run.

It is therefore misleading to say that a server costs more than a
desktop machine, because this is not an apples-to-apples
comparison. Understanding these different pricing models lets us
frame the discussion when we must justify the seemingly high prices
of server equipment. It is common to hear someone complain about a
server's $50,000 price tag when a high-end PC can be bought for
$5,000. If the server handles millions of transactions every day,
or meets the CPU needs of thousands of users, the expense is
justified. Furthermore, downtime on servers is more costly than
downtime on desktops. Redundant and hot-swap equipment can
effectively pay for itself by eliminating outages.

A more reasonable argument against such a purchase may be that the
capacity obtained exceeds what the application demands. If
performance is proportional to cost, purchasing unneeded capacity
is wasteful. Nonetheless, purchasing an oversized server can
postpone a costly upgrade later when more capacity is needed; that,
too, has value. Capacity-planning predictions and usage-pattern
analysis become useful here.

Consider how repairs will be handled when you buy a machine.
Eventually, all machines break. Vendors tend to offer a range of
repair contract options. For example, one type of maintenance
contract offers on-site support, with 4-hour response time, 12-hour
response time, or next-day options. Other options include the
customer purchasing a spare-parts kit and receiving replacements
whenever a spare part is used.

Some reasonable scenarios for choosing appropriate maintenance
contracts are as follows:

• Non-critical server. Some hosts are not critical, such as one of
  many CPU servers. In this case, a maintenance contract with
  next-day or 2-day response time is reasonable. No contract may be
  needed at all if the default repair options are adequate.

• Large groups of identical servers. Often, a site has many
  machines of the same type, possibly providing different kinds of
  services. In this situation, purchasing a spares kit may be
  appropriate, so that repairs can be carried out by local staff.
  The cost of the spares kit is amortized over the many hosts. Such
  hosts can then carry a repair contract, at reduced cost, that
  simply replenishes the parts taken from the spares kit.

• Controlled introduction. Hardware models change over time, and
  sites such as those described in the preceding item will
  eventually need to adopt new models, which may fall outside the
  scope of the spares kit. In this case, you may standardize on a
  given model -- or a set of models that share a spares kit -- for
  a set period of time. At the end of the period, you approve a new
  model and buy the appropriate spares kit. You might, for example,
  keep just two spares kits at any given time. To add a new model,
  you must first decommission all hosts that depend on the retiring
  spares kit. This keeps costs contained.

• Critical host. Sometimes, a fully stocked spares kit is too
  costly to justify. It may be appropriate to stock the replacement
  parts that fail most often and to pay for a repair contract with
  same-day response. Hard drives and power supplies malfunction
  most commonly and are often interchangeable among a number of
  products.

• Large variety of models from the same vendor. A very large site
  may adopt a repair plan that includes an on-site technician.
  Usually, this option makes sense either at a site that has an
  extremely large number of servers or at sites where the vendor's
  servers play a significant revenue-related role. However,
  medium-sized sites can sometimes arrange to have the vendor's
  regional spares kit stored on their premises, with the benefit
  that the technician then tends to hang out near your building.
  Officially, access to the spares kit may be permitted only on an
  emergency basis. (Usually, this is arranged without the knowledge
  of the technician's management.) An SA can ensure that the
  technician spends all of his or her free time on site by
  providing a small amount of office space and a machine to use as
  an administrative base. Often, in exchange, a reduction in
  maintenance-contract charges can be negotiated. One technician
  who had this arrangement, with little else to do at the site,
  would unbox and rack-mount new gear for the SAs.

• Highly critical host. Some vendors offer a maintenance contract
  that includes an on-site technician and a spare machine ready to
  be swapped in. This is nearly as expensive as paying for a
  redundant server, but for certain organizations without highly
  skilled staff it can make sense.

There is a trade-off between stocking spare parts and having a
service contract. At a small site, keeping your own spare parts may
be too costly. A maintenance contract includes diagnostic services,
even if only over the phone. On the other hand, the fastest way to
fix something is often to swap in new parts until the problem goes
away. It is difficult to keep staff trained in the full spectrum of
diagnostic and repair methodologies for all the models in use;
non-technology companies in particular may find this a frustrating
endeavor.

Occasionally, an SA discovers that a critical host is not on the
service contract. Such a discovery tends to come at a crucial
moment -- when the host needs repair, for example. The remedy
usually involves persuading a salesperson to have the machine
repaired in good faith that it will be added to the contract
immediately, or even retroactively. Writing purchase orders for
service contracts at 10 percent above the agreed contract price is
a reasonable idea, so that the vendor can increase the maintenance
charges if additional devices are added to the deal.

It is also common practice to review the service contract at least
annually, if not quarterly, ensuring that new servers are added and
that retired servers are removed. Once, Strata saved a client five
times the cost of her consulting services by correcting a vendor
maintenance contract that had run on unchecked for several years.

There are three simple ways to ensure that hosts are not left off
the contract. The first is to have a solid inventory system and use
it to cross-reference the service contract. However, strong
inventory systems are hard to find, and some hosts may slip past
even the best.
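
The first method, cross-referencing the inventory against the
contract, is mechanical enough to automate. A sketch in Python,
with made-up host lists:

    # Hypothetical data sources: the inventory and the vendor's contract.
    inventory   = {"web1", "web2", "db1", "mail1"}
    on_contract = {"web1", "db1", "mail1"}

    uncovered = inventory - on_contract  # would fail us at repair time
    orphaned  = on_contract - inventory  # retired, but still paid for

    print("Add to contract:", sorted(uncovered))
    print("Remove from contract:", sorted(orphaned))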

The second is to have the person in charge of processing purchases
also add new equipment to the contract. That person should know
whom to contact to determine the appropriate level of service. When
there is no single point of purchasing, some other choke point in
the process can be found at which the new host is added to the
contract.

Third, you can catch the problem at warranty time. Most machines
include free repairs for the first 12 months under their warranty
and do not need to be listed on the service contract during those
months. However, it is easy to forget to add the host to the
contract many months later, and the level of service during the
warranty period may differ. The SA can ask whether the vendor will
put the machine on the contract immediately, with a zero-dollar
charge for the first 12 monthly statements, to fix both issues.
Some vendors will do this, since it keeps the host in their
records. Lately, most vendors have offered the option of purchasing
a service contract when the equipment is ordered.

Service contracts are a reactive approach, rather than a proactive
one. (The next chapter addresses proactive approaches.) Service
contracts promise spare parts and repairs in a timely manner. There
are typically different grades of contract: the lower grades simply
ship parts to the site, while the more costly grades provide the
part and a technician to install it.

Cross-shipped parts are an integral part of timely repairs and
should ideally be provided under any maintenance contract. If a
server has hardware problems and new components are required, most
vendors expect the old, broken component to be returned to them.
That makes sense when the work is carried out without charge as
part of a warranty or service arrangement. The returned part has
value: it can be refurbished and returned to service for the next
customer needing that part. Without returns, a customer could
simply order part after part, possibly selling them for profit.

Vendors typically require notice and permission for the return of
damaged parts; this permission is called a Return Merchandise
Authorization (RMA). The vendor usually gives the customer an RMA
number to identify and track the returned items.

Some vendors do not ship the replacement component until the
damaged part is received. This procedure can increase the repair
time by a factor of two or more. Good vendors will ship the
replacement immediately and allow you to return the defective
component within a specified time limit. This is called
cross-shipping: the parts, in effect, cross paths as they are
shipped.

Normally, vendors require a purchase-order number, or a credit card
number, to guarantee payment in case the defective item is never
returned. That is a sensible way for them to protect themselves.
Having a service contract can also remove this requirement.

Be wary of vendors that sell servers but will not cross-ship under
any conditions. These vendors do not take the word server
seriously. You may be shocked to learn which big-name vendors have
this policy.

Buying a spare-parts kit reduces reliance on the vendor and can
deliver even faster repair times. A kit should contain one of every
component in the system. A kit usually costs less than a complete
machine because duplicated parts need not all be duplicated in the
kit; for example, if the original system has four CPUs, the kit
needs only one spare. The kit is also less expensive because
software licenses are not required. Once you buy a kit, you can
carry a support plan that simply replenishes whatever part of the
kit is used to restore a broken device. Get one spares kit for each
model in use that requires fast repair times. Maintaining several
spare-parts kits can become incredibly expensive, particularly when
the additional charges for a service contract are included. The
vendor may offer other options, such as a service plan that
guarantees delivery of replacement parts within a few hours, which
may reduce the total cost.

Servers contain critical data, so backups need to protect it.
Workstation clients are usually mass-produced from the same image
and normally store their data on servers, reducing the need for
client backups. If a workstation's disk dies, the machine's
configuration should be identical to that of its many peers,
unchanged from its original state, and can therefore be recreated
by the automated installation process. Those, at least, are the
assumptions. In practice, people save data on their local machines,
applications get installed locally, and OSs store some
configuration data locally. On platforms running Windows, this
cannot be prevented: roaming profiles save user settings to the
network at each log-out but do not protect the application and
registry settings stored locally on the computer.

UNIX systems are guilty of this to a lesser degree. A
well-configured host that denies users root access prevents
anything but a few special files from being modified on the local
disk. Local changes to crontabs (scheduled tasks) and other files
stored in /var, for example, remain possible. Normally, a simple
program that backs up those few files each night is sufficient.
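
A minimal sketch of such a nightly job in Python; the file list and
destination are examples only, and a real deployment would run from
cron and copy to a remote backup server rather than a local
directory.

    import datetime
    import pathlib
    import shutil

    # Hypothetical list of locally modified files worth saving nightly.
    FILES = ["/var/spool/cron/root", "/etc/hosts"]
    DEST = pathlib.Path("/backup") / datetime.date.today().isoformat()

    DEST.mkdir(parents=True, exist_ok=True)
    for f in FILES:
        src = pathlib.Path(f)
        if src.exists():
            shutil.copy2(src, DEST / src.name)  # copy2 keeps timestamps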

Servers should be housed in a location with sufficient power, fire
protection, networking, cooling, and physical security. It is a
smart idea to allocate physical space for a machine while it is
being purchased. Marking the spot by taping a paper sign onto the
chosen rack protects against double-booked space. Reserving the
necessary power and cooling capacity means tracking them in a list
or spreadsheet.

It is best to mount a server in its rack immediately upon arrival,
before loading the operating system and other software. We have
repeatedly observed the following phenomenon: a new machine is set
up in someone's office, and the OS and applications are loaded onto
it. A few trial customers learn about the service while the
applications are being set up. The machine is then in constant
demand before it is ready, all while sitting in someone's office
without the protections of the computer room, such as UPS power and
air conditioning. Later, when the machine is finally moved into the
computer room, the people already using the service are interrupted
by an outage. Mounting the server in its final location as soon as
it arrives is the way to avoid this scenario.

Field offices aren't always big enough to have data centers, and
some entire companies aren't big enough to have data centers. Every
site should, however, have a dedicated room or closet with the bare
minimum requirements: physical security, UPS -- many small units,
if not one big one -- and adequate cooling. A telecom closet with
decent ventilation and a lockable door is much better than having
the company's payroll processed on a machine sitting under
someone's desk. Inexpensive cooling solutions are available, some
of which reduce the need for plumbing by re-evaporating the water
they collect and sending it out the exhaust air vent.

Servers do not need to run the same OS as their clients. Depending
on the intended use, a server may run a completely different OS,
largely the same one, or the same basic OS with a different
configuration. Each approach is appropriate in different
circumstances.

A web server, for example, does not need to run the same OS as its
clients; the clients and the server need only agree on a protocol.
Single-function network appliances often run a mini-OS that
contains just enough software to do the one task required, whether
serving files, databases, or mail.

Sometimes a server runs the same OS as its clients. Consider the
example of a site with lots of general-purpose UNIX desktops and a
set of UNIX CPU servers. As mentioned in Chapter 3, the clients
will have identical cookie-cutter OS loads. The CPU servers may
have the same basic load, but configured differently, with a larger
number of processes, pseudo-terminals, buffers, and other
parameters.

It is important to note that what counts as a suitable server
configuration is a matter of context. When loading Solaris 2.x, you
might designate the host a server, which causes all software
packages to be installed, because diskless clients, or clients with
small hard drives, may use NFS to mount those packages from the
server. On the other hand, when installing Red Hat Linux, the
server configuration is a simple, minimal set of packages: you take
the base installation, on top of which you load the particular
software packages that will be used to provide the service. The
latter approach has grown more popular as hard disks have grown.

Servers need to be maintained remotely. In the old days, each
server in the machine room had its own console: a keyboard, a video
monitor or hardcopy console, and possibly a mouse. As SAs packed
more machines into their computer rooms, they saved substantial
space by eliminating these consoles.

A KVM switch is a device that allows a single keyboard, video
screen, and mouse (KVM) to be shared among several machines.
Without one, you might fit only three servers and their three
consoles into a single rack. With a KVM switch, the rack requires
just one keyboard, display, and mouse, and more servers can fit
into it. You can save even more room with one KVM switch per row of
racks, or one for the entire data center, though the larger KVM
switches can be prohibitively expensive. With IP-KVMs -- KVM
switches that have no directly attached keyboard, display, or mouse
-- you save even more space: you simply run a client on another
computer and connect to the KVM console server over the network.
You can even do it from a coffee shop, connecting to your network
via VPN!

The precursor to the KVM switch was the serial console
concentrator, for serial-port-based devices. Originally, servers
had no video card; instead, they had a serial port to which a
terminal was connected. These terminals took up a lot of space in
the computer room -- often a long table with a dozen or more
terminals, one for each server. It was considered quite a technical
advance when someone realized that one could buy a small server
with a dozen or so serial ports and connect each port to a server's
console. One would log in to the console server and from there
connect to a particular serial console. No more walking to the
machine room to use a console.

Serial console concentrators are now available in two forms:
home-brew or appliance. For the home-brew approach, you take a
machine with lots of serial ports and load the software -- free
packages such as ConServer, or commercial equivalents -- and build
it yourself. Appliance solutions are pre-built units from a
manufacturer; they tend to deploy more quickly and hold all their
software in firmware or solid-state storage, so there is no hard
drive to fail.

Serial consoles and KVM switches have the advantage of letting you
operate a system's console when the network is down or the system
is in bad shape. For example, certain things can be done only while
a machine is booting, such as pressing a key sequence to activate
the BIOS configuration menu.

Some vendors offer hardware cards that permit remote control of the
machine; this function is sometimes the differentiator between
their server-class machines and the rest. Third-party products can
add this capability, too.

Since a serial console delivers a single stream of ASCII data,
capturing and processing it is easy. On a logged serial console,
one can see everything that has happened, going back months. This
can be helpful in diagnosing error messages that were sent to the
console.
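
Capturing that stream really is straightforward. A sketch using the
third-party pyserial library; the device path and baud rate are
assumptions that vary from one installation to the next.

    import serial  # third-party package: pyserial

    # Assumed settings: first serial port, 9600 baud, 1 s timeout.
    with serial.Serial("/dev/ttyS0", 9600, timeout=1) as port, \
            open("console.log", "ab") as log:
        while True:
            line = port.readline()  # returns b"" when the timeout expires
            if line:
                log.write(line)     # months of history, greppable later
                log.flush()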

Networking devices, including routers and switches, still have
serial consoles. A serial console server can be useful to have in
addition to a KVM unit.

Watching what is sent to a serial port can be fascinating. Even
when no one is logged in to a Cisco router, the console serial port
emits error messages and alerts. The findings may come as a
surprise.

Two key factors when purchasing server equipment are what form of
remote access to the console is available and which tasks require
such access. During an emergency, requiring SAs to travel to the
machine's physical location to do their job is neither fair nor
timely. In non-emergency situations, a system administrator should
be able to fix at least minor problems from home or on the road
and, optimally, be fully productive remotely when telecommuting.

Remote access has obvious limits, however, because certain tasks,
such as flipping a power switch, inserting installation media, or
replacing damaged hardware, require a person at the device. The
eyes and hands for the remote engineer may be an on-site technician
or a friendly volunteer. Some devices allow individual power ports
to be switched on and off remotely, so hard reboots can be
performed from afar. Installing hardware, however, remains a task
for trained personnel on site.

Remote console access saves money and improves working conditions
for system administrators. Machine rooms are optimized for
computers, not humans: they are loud, cold, and costlier per square
foot than office space. It is wasteful to fill expensive rack space
with displays and keyboards rather than additional hosts, and
filling a machine room with chairs for SAs is uncomfortable, if not
harmful.

SAs should not spend a typical day working inside the computer
room; using SAs to fill a computer room is bad practice. Working
directly in the machine room rarely satisfies ergonomic standards
for keyboard and mouse placement, or environmental standards such
as noise level, and working in a cold machine room is not healthy.
SAs work best in an environment that maximizes their productivity,
and that is better achieved at their own desks. Important SA tools,
such as reference books, ergonomic keyboards, telephones,
refrigerators, and audio equipment, are easily accommodated in an
office, unlike in a machine room.

Having a lot of people in the machine room is not good for the
machines, either. People in a computer room increase the load on
the heating, ventilation, and air conditioning (HVAC) systems. Each
person generates about 600 BTU of heat, and the extra power used to
remove those 600 BTU can be costly.
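
As a rough worked example (the electricity price and cooling
efficiency are assumptions), 600 BTU per hour is about 176 watts of
heat per person, which the cooling plant must remove around the
clock:

    BTU_PER_HOUR = 600
    watts = BTU_PER_HOUR / 3.412  # ~176 W of heat per person

    COP = 3.0         # assumed cooling coefficient of performance
    KWH_PRICE = 0.12  # assumed dollars per kWh

    cooling_kw = (watts / 1000) / COP  # electricity spent on removal
    yearly_cost = cooling_kw * 24 * 365 * KWH_PRICE
    print(f"${yearly_cost:.0f} per person per year")  # roughly $60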

When consoles can be reached remotely, attention must be paid to
the security consequences. Host security strategies often depend on
the consoles being behind a locked door; remote access breaks that
assumption. Console systems should therefore be designed with
security and privacy schemes properly considered. For example, you
might permit access to the console system only over an encrypted
channel, such as SSH, and require authentication through a one-time
password system, such as a handheld authenticator.

When purchasing a server, you should expect remote console access.
If the vendor cannot meet this requirement, you can look for
equipment elsewhere.

The boot drive -- the disk holding the operating system -- is
typically the hardest one to replace when it dies, so we take extra
measures to speed its recovery. Every server's boot disk should be
mirrored: two disks are installed, and every change made to one is
made to the other as well. When one disk fails, the system
automatically switches to the working disk. Most operating systems
can do this for you in software, and many hard-disk controllers do
it for you in hardware.

Over the years, the cost of disks has fallen considerably, making
this once-expensive option commonplace. Optimally, all disks are
mirrored or protected by a RAID scheme. When you can't do that,
though, at least mirror the boot disk.
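
On Linux software RAID, for example, mirror health is visible in
/proc/mdstat. A hedged Python sketch that flags degraded arrays;
the parsing is simplistic and assumes the kernel's usual status
format, where a healthy two-disk mirror shows [UU]:

    import pathlib
    import re

    def degraded_arrays(path="/proc/mdstat"):
        """Return md devices whose status shows a failed member ('_')."""
        text = pathlib.Path(path).read_text()
        bad = []
        for name, status in re.findall(r"^(md\d+).*?\[([U_]+)\]", text,
                                       re.MULTILINE | re.DOTALL):
            if "_" in status:  # '_' marks a missing or failed half
                bad.append(name)
        return bad

    print(degraded_arrays() or "all mirrors healthy")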

Mirroring has performance trade-offs. Read operations become
faster, because half of them can be done on each disk; on a busy
server, two independent spindles working for you gain significant
throughput. Writes become somewhat slower, since twice as many disk
writes are required, though they are typically performed in
parallel. This is less of a concern on systems that have
write-behind caches, such as UNIX. Since an operating-system disk
is mostly read and rarely written, a net benefit is commonly
observed.

Without mirroring, a failed disk is an outage. With mirroring, a
failed disk is a survivable event that you manage. If a defective
disk can be replaced while the system is running, the failure costs
you just one component of a redundant pair. If the system requires
damaged disks to be replaced while powered off, the outage can be
scheduled around business needs. This gives us outages that we
control rather than outages that control us.

Note that a RAID (Redundant Array of Independent Disks) mirror
protects only against hardware failure. It does not guard against
software or human error: erroneous changes made to the primary disk
are repeated on the second disk immediately, making it impossible
to recover from the mistake by turning to the second disk.

With the fundamentals in place, we now look at what can be done to
take reliability and serviceability a step further. We also
summarize an alternative viewpoint.

A server appliance is an instrument intentionally designed for a
particular purpose. Toasters make toast. Blenders blend. One can
use general-purpose devices for such tasks, but there are benefits
to using a device designed to do one function very well.

There are appliances in the digital world, too; the dedicated
network router was the first. Some scoffed: why spend all that
money on a device that just sits there and forwards packets, when
we can simply add interfaces to our VAX and do the same thing? It
turned out that a lot of people wanted them. It became clear that,
in many cases, a device dedicated to a single function, and doing
it well, was more valuable than a general-purpose system that could
also do other tasks. And, yes, it also meant you could reboot the
VAX without taking down the network.

A server appliance packs years of experience into one box. Building
a good server is hard. An appliance's hardware meets all the
requirements listed earlier in this chapter, with systems
engineering and performance tuning that only a highly experienced
expert could deliver. Often the infrastructure needed to provide a
service involves assembling various packages, gluing them together,
and developing a cohesive, unified management interface for all of
them. That is a lot of work! Appliances do everything you need
right out of the box.

Although a senior system administrator can build a file or email
service out of a general-purpose computer, buying an appliance
frees the SA to concentrate on other tasks. Every appliance
purchased means one less system built from scratch, plus access to
vendor support during an outage. Appliances also let companies
without that specific expertise gain access to well-designed
systems.

The other advantage of appliances is that they often have features
not available elsewhere. Competition pushes vendors to add new
features, increase performance, and improve reliability. For
example, NetApp filers provide tunable file-system snapshots, which
eliminate many file-restore requests.

Redundant Power Supplies

After hard drives, the next most failure-prone component of a
system is the power supply. Ideally, servers should therefore have
redundant power supplies.

Having a redundant power supply means more than having two power
supplies in the case. It means that the machine remains operational
when one power supply stops working: n + 1 redundancy. Often, a
fully loaded machine needs two power supplies just to receive
enough power; redundancy then means having a third. This is an
important point to raise with suppliers when purchasing servers and
network equipment. Network equipment is especially prone to this
problem: on a large network device fully loaded with power-hungry
fiber interfaces, dual power supplies may be a minimum requirement,
not redundancy. Vendors rarely say that up front.
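
The difference between "two supplies" and "n + 1" is arithmetic
over the load, as this Python sketch with invented wattages shows:

    import math

    def supplies_needed(load_watts: float, supply_watts: float) -> int:
        """Units required to carry the load, plus one redundant (n + 1)."""
        n = math.ceil(load_watts / supply_watts)
        return n + 1

    # A fully loaded chassis drawing 1,400 W from 800 W supplies
    # needs two just to run -- so real redundancy means a third.
    print(supplies_needed(1400, 800))  # -> 3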

Each power supply should have its own power cord. In practical
terms, the most common power problem is a power cord accidentally
pulled out of its socket. Formal power-reliability studies
frequently miss such problems, because they study utility capacity;
a single power cord does not help you in this case! Any vendor that
supplies multiple power supplies with one power cord demonstrates
ignorance of this fundamental operational issue.

Another reason for separate power cords is that they enable the
following trick: sometimes it is necessary to move a system to a
different power strip, UPS, or circuit. In this case, separate
power cords allow the device to be moved one cord at a time to the
new power source, eliminating downtime.

In very-high-availability systems, each power supply can draw power
from a different source, such as separate UPSs. If one UPS fails,
the system keeps running. Most data centers spread their power
feeds with this in mind. More commonly, each power supply is wired
to a different power distribution unit (PDU). If someone mistakenly
overloads one PDU with other devices, the machine stays up.

As described above, n + 1 redundancy applies to systems designed in
such a way that any one of their components can fail and the system
remains functional. Noted examples are RAID systems that continue
to offer service even if a single hard disk fails, and an Ethernet
switch with extra switching-fabric modules so that traffic can
still be routed if one module fails.

By comparison, a fail-over configuration combines two complete sets
of hardware in full redundancy. The first system performs the task,
while the second sits idle, ready to take over if the first fails.
This failover may occur manually -- somebody notices that the first
system failed and activates the second -- or automatically -- the
second system monitors the first and activates itself.

Many fully redundant systems are configured for load sharing. Both
systems run in production, and both share the operational workload,
but each server retains enough spare capacity to handle the entire
workload of the other.

When one system fails, the other takes over its counterpart's
workload. The systems may be designed to monitor each other's
health, or another process may track the flow and distribute
service requests.

N + 1 is cheaper than full redundancy, whether 2n or more, and
customers often choose it for the economic benefit. Typically, only
certain server subsystems are n + 1 redundant, not the whole
package. Pay particular attention when a vendor tries to sell you
on n + 1 redundancy in which only parts of the system are
redundant: a car with extra tires is no help when its engine is
dead.

Redundant components should be hot-swappable. Hot-swap refers to
the ability to remove and replace a component while the system is
running; ordinarily, components can be removed and replaced only
when the system is powered off. Hot-swap parts are like being able
to change a tire while the car is driving down the highway. Not
having to wait to fix a problem is a good thing.

The first benefit of hot-swap components is the ability to add
capacity while the system is running, without scheduling downtime.
Installing a new component is a planned event, though, and can
usually wait for the next maintenance window. The real value of
hot-swap parts shows up after a failure.

The longer you wait, the higher the risk. Without hot-swap parts,
an SA must wait until a reboot can be scheduled before the system
returns to full n + 1 protection. With hot-swap parts, an SA can
replace the failed component without scheduling downtime. RAID
systems extend this with the concept of a hot spare disk that sits
in the machine, unused, ready to replace a failed disk. When the
system detects a broken disk, it isolates it, keeping the failure
from taking down the whole machine, and can automatically activate
the hot spare, making it part of whichever RAID set needs it. That
effectively makes the system n + 2.

The sooner the system is returned to a state of full redundancy,
the better. RAID systems often run more slowly until the failed
part is replaced and the RAID set is rebuilt. More importantly,
while the system is not fully redundant, you run the risk of a
second disk failing -- at that point, you lose all your data. Many
RAID systems can be configured to shut themselves down if they run
in non-redundant mode for longer than a specified number of hours.

Hot-swappable components increase a system's cost. How is the extra
expense justified? It is worth the price of the downtime it avoids.
If a machine has scheduled downtime once a week, and running the
device for a week at the risk of a double failure is acceptable,
hot-swap components may not be worth the extra cost. If the machine
has a scheduled maintenance window only once a year, the cost is
much more likely to be justified.

When a vendor makes a point of hot-swappability, always ask two
questions: Which parts cannot be hot-swapped? How, and for how
long, is service disrupted while parts are hot-swapped? Many
network systems have hot-swappable interface cards, but the CPU is
not hot-swappable. Many network devices claim hot-swap capability
but perform a full system reset after any card is inserted; the
reset can take seconds or minutes. Certain disk subsystems pause
all I/O for as long as 20 seconds when a drive is replaced. Others
run for several hours with severely degraded performance while the
data is rebuilt onto the replacement disk. Be sure you understand
the consequences of component failure. Do not assume that hot-swap
parts make outages disappear; they only reduce the downtime.

Vendors could label components to indicate whether they are
hot-swappable, but they often do not. If the vendor does not supply
stickers, make your own.

Additional network interfaces on servers let you build separate
administrative networks. For example, a separate network is
commonly used for backups and monitoring. Backups consume vast
amounts of bandwidth while they run, and moving them off the main
network means that backups do not degrade the customers' use of the
network. Simpler hardware can be used for this secondary network,
making it more reliable or, more importantly, unaffected by outages
on the main network. It also offers SAs a path into the network
during such an outage. This type of redundancy solves a very
specific problem.

While this chapter recommends paying more for server-grade
equipment because the extra performance and reliability are worth
it, a growing counter-argument holds that it is cheaper to use
large numbers of cheap servers that fail more frequently. If you do
a good job of managing the failures, this strategy can be more
cost-effective.

Operators of massive web farms deploy many replicated servers
automatically, all configured exactly the same. If no single web
server can handle more than 500 queries per second (QPS), you may
need ten servers to handle the 5,000 QPS you expect from users on
the Internet. A load-balancing device distributes the load across
the servers. Load balancers are also an effective way to detect
down machines quickly: when one node goes offline, the load
balancer spreads the requests across the remaining active servers,
and the customers still get service.
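
The sizing argument reduces to simple arithmetic, sketched here in
Python with the QPS figures from the text:

    import math

    def servers_needed(total_qps: float, per_server_qps: float,
                       spares: int = 1) -> int:
        """Servers to carry the load, plus spares to absorb a failure."""
        return math.ceil(total_qps / per_server_qps) + spares

    # Ten servers carry 5,000 QPS; one spare rides behind the balancer.
    print(servers_needed(5000, 500))  # -> 11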

What if you used lower-quality parts that caused ten times the
failures? If doing so reduced the purchase price by 10 percent, you
could buy an eleventh machine to compensate for the higher failure
rate and poorer performance. You would be paying the same amount of
money, getting the same total QPS, and achieving the same uptime.
No difference, right?

In the early 1990s, servers often cost $50,000. Desktop PCs cost
around $2,000, because they were manufactured from generic
components mass-produced at volumes orders of magnitude greater
than those of server components. If you had built a server from
those commodity parts, it would not have been able to provide the
necessary QPS, and its failure rate would have been much higher.

By the late 1990s, however, the economics had changed. Prices had
dropped and performance had improved significantly, thanks to the
continued mass production of PC-grade components. Organizations
like Yahoo! and Google worked out how to manage vast numbers of
machines efficiently, streamlining equipment deployment, software
upgrades, repair management, and so on. It turns out that when you
do these things at large scale, the cost goes down considerably.

Conventional wisdom said that you could never run a commercial
service on a commodity-based server that could handle only 20 QPS.
Once you can manage enough of them, however, things start to
change. Continuing the scenario, you would need to purchase 250 of
those servers to equal the capacity of the 10 conventional servers
above -- about the same amount of money you would pay for the
conventional equipment.

As commodity performance improved, this kind of solution became
less costly than purchasing large servers. When the commodity
machines produced 100 QPS, 50 of them provided the same capacity at
a fifth of the price; alternatively, you could spend the same
amount of capital and get five times the processing power.
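
Working the comparison out explicitly in Python, with the
approximate figures used in the text:

    # Conventional servers: 10 machines at $50,000, 500 QPS each.
    conventional_cost = 10 * 50_000  # $500,000 for 5,000 QPS

    # Improved commodity box: $2,000 and 100 QPS (figures from text).
    price, qps = 2_000, 100
    match_capacity = (5_000 // qps) * price            # 50 boxes: $100,000
    same_budget_qps = (conventional_cost // price) * qps  # 250 boxes: 25,000 QPS

    print(match_capacity, same_budget_qps)  # -> 100000 25000 (5x power)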

The expense could be further reduced by eliminating


unnecessary components such as video cards, USB connections and so
on from such a device. Ideally one would usually buy five to ten
commodity-based servers for each large computer bought, to have
greater computing power.

Streamlining the physical hardware specification also led to more
compact packaging, with large servers slimmed down to a single
rack unit.

This sort of large-scale cluster computing is what makes huge
cloud services possible, and we can expect more and more capital
to flow into this kind of architecture. Blade server technology is
another way to pack an increasing number of computers into a small
space. A single chassis contains multiple slots, each of which
holds a card, or blade, containing a CPU and memory.

The chassis supplies power, network connectivity, and management.
Sometimes each blade has its own hard disk; other designs instead
give every blade access to a consolidated storage area network.
Because all the blades are identical, an automated system can
designate a spare as the replacement when one dies.

One increasingly relevant emerging technology is the virtual
server. Cloud infrastructure is now so effective that it is harder
to justify the complexity of dedicating a physical machine to a
single use. Treating a server as a bundle of components (hardware
and software), with numerous virtual servers operating on a large
and growing cloud, offers the best of both worlds: the protection
and ease of use of a dedicated machine without its cost.

CHAPTER IV

SERVICES

Overview

This unit aims to develop an understanding of server hardware and
of the services that servers provide. A service can be built on
multiple servers that work in concert with one another. This
chapter explores how to create a service that meets customer
expectations, is reliable, and can be maintained. It also covers
how to offer a service by selecting the hardware and software, and
how to make the service reliable by scaling it as usage grows and
by monitoring, maintaining, and supporting it. A service is not
really a service until it meets these basic demands. One of an
SA's essential tasks is to provide the services that clients need.
Those needs change as the customers' work and technologies
develop, so an SA devotes a considerable amount of time to
designing and building new services. How well the SA builds these
services determines how much time and money will be spent
supporting them in the future and how contented the customers will
be.

A typical environment has many services. Fundamental services
include DNS, email, authentication, network connectivity, and
printing. These services are the most critical, and when they
fail, the failures are the most visible. Other services range from
remote access, network license service, software depot service,
backup service, Internet access, and DHCP, to file service. Those
are only some of the common services normally offered by system
administration teams. In addition to these are the
business-specific services that sustain the organization:
accounting, manufacturing, and other processes. Services are what
distinguish an organized computing environment managed by SAs from
an area where one or more individual computers merely happen to be
located. Homes and very small offices usually have a handful of
stand-alone computers rather than provided services. Larger
facilities are usually linked by shared services that ease
connectivity and optimize resources. A home machine connecting to
the Internet through an Internet service provider uses the
services offered by the ISP and by the other organizations the
user reaches over the Internet. An office environment offers the
same services, plus more.

Learning Objectives:

At the conclusion of the chapter, the student can:
1. describe the different services that a computer server
provides and how they operate;
2. differentiate the types of servers needed and the requirements
to satisfy the operations of a business; and
3. design a network for a business that will make use of a
server for its operation.

Ask the customers to find out what their business requires the
service to deliver. The service and the machines it relies on must
be monitored, and failures must generate alerts or trouble tickets
where necessary. If hostnames or domain names are configured for
the service, it relies on DNS. If its log files record the names
of the domains that used the service or were served by it, it uses
DNS; if the machines accessing it contact other machines through
the service, they use DNS. Many services also depend on an
authentication and authorization service to distinguish one user
from another, particularly when different identity-based access
levels are granted. The failure of certain systems, such as DNS,
causes cascading failures in all the other services that rely on
them. Recognizing the other services a new service relies upon is
an important part of building it.

Machines and software that are part of a service should rely only
on hosts and software that are built to the same or higher
standards. A service can be only as reliable as the weakest link
in the chain of services on which it depends. A service should not
depend gratuitously on hosts that are not part of the
infrastructure. The more people who use a computer, and the more
processes that run on it, the greater the chance that something
will go wrong. Customers' machines also have other software
installed so that the customers can access the data they want and
use various network services. Anyone who can disrupt the
authentication system can gain access to the customers who depend
on it; anyone who can subvert the DNS servers can redirect traffic
destined for the service and potentially capture passwords.

Restricting authentication and other kinds of access to machines
within the managed infrastructure avoids these kinds of threats.
Servers should run the minimum needed for the service they
provide, and only SAs should have access to them, so that only SAs
can sign in to do maintenance. An SA has many choices to make when
building a service: which vendor to purchase the hardware from,
whether to use one or several servers for a complicated service,
and what level of redundancy to build into the service. Keeping
the service independent of the particular machine by using
function-oriented names in configurations, such as the hostname,
is a vital aspect of deploying a new service. If this feature is
not provided by the OS, tell the OS vendor that it is needed, and
consider choosing another operating system.

A service is created for its customers. If it fails to fulfill
their needs, building it was wasted effort. Customers may prefer
particular applications, such as their email clients, and
different customers place different loads on the service,
depending on the work they do and how they configure the
applications they use. SAs ought to understand how the service
affects customers and how customer requirements shape the service
architecture. Gather information from customers on how they will
use the new service, the features they need, how critical the
service is to them, and the availability and support levels it
requires. If an application is selected that customers find
burdensome to use, the project may fail. Try to estimate how large
the customer base for this service will be and what functionality
it will expect and need, so that the service can be built at the
right scale. This is also an appropriate time to define a
service-level agreement for the new service.

An SLA defines the features the service should offer and the level
of support it will receive. It classifies problems by priority and
specifies a response time for each, which may vary by time of day
and day of the week if the service is not supported around the
clock. The SLA describes an escalation mechanism that raises a
problem's visibility if it is not resolved within a specified time
and brings in senior administrators to fix complex issues. In
arrangements in which the customer pays for a specific service
level, the SLA typically spells out penalties if the provider
fails to meet the agreed level. The SLA should be discussed in
depth, and all parties to the agreement must approve it.
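
As an illustration only, an SLA's priority and response-time
matrix can be captured in a simple lookup table. The priorities
and hour figures below are invented placeholders; a real SLA
records whatever the parties actually negotiated.

# sla_response.py -- a minimal sketch of an SLA response matrix.

RESPONSE_HOURS = {
    # (priority, business_hours?) -> max hours before first response
    ("critical", True): 1,
    ("critical", False): 4,   # nights/weekends, if 24/7 is not offered
    ("major", True): 4,
    ("major", False): 24,
    ("minor", True): 24,
    ("minor", False): 72,
}

def response_deadline(priority, business_hours):
    """Look up the negotiated first-response window for a ticket."""
    return RESPONSE_HOURS[(priority, business_hours)]

print(response_deadline("critical", False))  # -> 4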

The SLA process is a vehicle for the SAs to understand the
customers' expectations and to set them correctly, so that the
customers know what is supported, what is not, and why. It is also
a tool for planning the resources the project will need. The SLA
should document the customers' needs and set concrete goals for
usability, availability, performance, and support for the SA team.
It should also record future expectations and growth plans so that
both parties understand the direction of the service. The SLA is a
reference that the SA team can consult throughout the design
process to ensure that they meet the customers' expectations and
stay on track. The overall goal is to find the middle ground among
what customers want, what is technically possible, what is
financially affordable, and what the SA team can provide. Treat it
as a consultative process in which the job is to educate the
customers and work together toward that middle ground.

The SA team may have requirements of its own for a new service
that customers will not see directly. SAs should consider how to
scale the service up later without disrupting the running service.
Can an upgrade be rolled out slowly, tested on a few eager
volunteers, before being imposed on the entire organization?
Building the service so that it can be upgraded without touching
the desktops makes it possible to roll out changes gradually
without interrupting service. For reliability, choices such as
clustering, slave or redundant servers, or high-availability
hardware and operating systems (OSs) should match the reliability
the customers expect and the software constraints the SAs
anticipate. SAs must always recognize the performance issues of
the network between the location of the service and the locations
of the customers. Bandwidth refers to how much data can be
transferred in a second; latency is the delay before the data is
acknowledged at the other end. A high-latency link has a long
round-trip time, irrespective of bandwidth: the time for a packet
to travel out plus the time for the answer to return.

Consider a client that sends a query and awaits the response
before sending the next. What if the server is in India and the
client runs on a laptop in New York? Suppose it takes half a
second for the last bit of a request to arrive in India. A job of
five queries will now take 5 seconds of network time (one-half
second for each request and each answer) plus the time the server
spends processing the queries. Suppose the link to India is a T1.
Will upgrading the connection to a T3 fix the problem? If the
latency of the T3 is the same as that of the T1, the upgrade does
not change the situation. The real solution is to send all five
queries concurrently and collect the answers as they return, or to
send the server a single, longer SQL query that gathers the
responses, summarizes them, and returns only the result. Otherwise
the total time is the sum of the time each request takes to
complete.

The time each request takes consists of three components: sending
the query, processing it, and receiving the answer. A vendor's
easy offer is to sell the customer more bandwidth, and as we have
just seen, additional bandwidth does not fix a latency problem.
The only remedy is to improve the application.

Generally, improving the application means rethinking its
algorithms. For high-latency networks, the protocols must be
modified so that requests and responses do not proceed in
lock-step. One approach sends many queries at once, batched into a
small number of packets, and processes the responses as they
stream back. A further crucial requirement for satisfying the SLA
is integrating the new service into the established monitoring
systems.
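
A back-of-the-envelope model makes the New York/India example
concrete; the half-second figure is the one assumed above, and
server processing time is held out as a separate term.

# latency_model.py -- lock-step versus batched requests over a
# high-latency link.

ONE_WAY_LATENCY_S = 0.5   # seconds for a packet to cross the link
QUERIES = 5

# Lock-step: each query waits for the previous answer to come back.
sequential = QUERIES * 2 * ONE_WAY_LATENCY_S   # 5.0 s of pure waiting

# Batched/windowed: all five go out together; answers stream back.
batched = 2 * ONE_WAY_LATENCY_S                # 1.0 s of pure waiting

print(f"lock-step: {sequential} s + processing time")
print(f"batched:   {batched} s + processing time")
# A fatter pipe (T1 -> T3) changes neither number: bandwidth is
# not latency.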

Open protocols and file formats are specifications published in
the public domain, so that many vendors can build products that
implement them and interoperate. By contrast, an organization that
uses proprietary protocols and file formats can interoperate with
fewer products, because such protocols and formats are subject to
change without notice and require the product owner's permission.
Vendors using proprietary protocols sometimes make specific
license arrangements with other vendors. Even then, there is
typically a gap between one vendor's release of a new version and
the second vendor's release of a compatible one. Relations between
the two vendors can also break down, and the interoperability can
cease. This situation is a nightmare for customers who bought both
products and rely on them working together.

SAs should understand the distinction between a protocol and a
product. SMTP is not a product but an English-language document
describing how bits are to be transmitted over the wire. Part of
the confusion stems from the fact that organizations often have
internal standards that specify particular products to be deployed
and supported. That is a separate use of the word standard. The
situation gives the impression that a protocol is tied to a
particular software package and cannot stand as an independent
specification.

The Web has taught many customers the distinction between
protocols and products, yet many vendors still profit from
customers' confusion about open protocols. Vendors afraid of open
competition would rather lock people into proprietary systems that
they will have trouble migrating away from. Such vendors make a
deliberate attempt to blur the difference between protocol and
product.

There is a clear business rationale for using open protocols: they
give better choices, because the best client and the best server
can be selected independently rather than one poor choice, such as
a weak client or a weak server, being forced along with the other.
Customers want an application with the usability and features they
need. Historically, whichever of the customers or the SAs had more
influence chose in private, surprising the other with the
decision. The customers' choice might be a package that is
difficult to manage, making it hard to deliver excellent service.
Customers are entitled to select the application best suited to
their needs, preferences, and even platform.

SAs, likewise, can choose among server products rather than being
stuck with hard-to-operate server software and platforms dictated
by a client application. We call this the ability to decouple the
client and server choices. Problems arise, for example, when
customers select a proprietary mail program that does not use open
protocols, however well it suits their needs. The other advantage
of building on open protocols is that such systems do not require
gateways to the rest of the world. Gateways are the "glue" binding
different systems together, and they are extra services to run and
maintain; reducing the number of services to operate is a
wonderful idea.

Simplicity should be a goal, because a simple service is more
reliable, easier to manage, easier to build, and easier to
integrate with other systems. Undue complexity leads to confusion,
mistakes, and difficulty in later integration, and it can make
everything slower. It is also more costly in both startup costs
and running costs. Suppose one product has the 20 basic features
needed and another has those plus 200 additional features. We can
expect the feature-rich product to trigger more bugs and to be
harder for its developer to maintain. Often, just one or two
customer or SA requirements account for much of a design's
complexity. If that proves true during the design phase, it is
worth going back to the source and reassessing the importance of
those requirements.

Explain to the customers or SAs responsible for such requirements
what they cost in stability, service price, and ongoing
maintenance. Then challenge them, in that context, to reconsider
the requirements and decide whether they must be fulfilled or can
be dropped. Returning to our two hypothetical products: the one
with only the 20 basic features may lack a feature somebody wants,
tempting you to reject it. Customers who appreciate the value of
simplicity, on the other hand, may be willing to forgo that
feature and gain higher reliability.

When choosing the hardware and software for a service, talk to the
vendors' sales engineers to learn how best to configure a product.
Hardware vendors often provide technical specifications tuned for
particular applications, such as databases or web servers. If the
vendor produces a configuration for the specific application, it
can supply a suitable canned setup. When more than one vendor in
the running has a suitable product, use that situation: get the
vendors to bid against each other for the business. With a set
budget, the likelihood rises of getting more for the same price,
whether in performance, reliability, or scalability. Getting a
good deal leaves more of the budget for other parts of the
service. Even after a vendor has been selected, do not reveal the
choice until it is clear that this is the best price available.
When selecting a vendor, particularly for a software product, it
is crucial to consider the direction in which the vendor is taking
the product. It may also be possible to get involved in early
trials of upcoming releases and to influence the product's course
by showing the product manager what functionality will matter in
the future.

For central services, such as authentication or directory
services, following the product's roadmap is essential; otherwise
you may discover too late that the vendor no longer supports the
program. The impact of having to replace a key piece of
infrastructure can be massive. Prefer vendors, where possible, who
develop the product primarily on the platform in use at the site
rather than porting it there from another platform. The software
will have fewer bugs, receive new features first, and get better
support on its primary development platform. Vendors are also far
less likely to drop support for that platform.

Clients should access a service using a generic name based on the
function of the service. Clients can point their calendar
applications at a server named calendar, and their email clients
at a Post Office Protocol (POP) server, an Internet Message Access
Protocol (IMAP) server, and a Simple Mail Transfer Protocol (SMTP)
server known collectively as the mail service. Several of these
services may live on the same machine, but accessing them by
function-based names allows growth: the service can later be split
across several machines without reconfiguring the clients.
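
A short, hypothetical sketch of the decoupling this buys; the
hostnames are placeholders for function-based aliases (such as DNS
CNAME records) in your own zone.

# function_names.py -- why clients should use role names, never
# machine names. Substitute your own zone's names.
import socket

# Clients are configured with role names only:
MAIL_HOST = "imap.example.com"          # alias pointing at ...
CALENDAR_HOST = "calendar.example.com"  # ... today's server

def lookup(name):
    """Resolve a service alias to whatever machine serves it now."""
    return socket.gethostbyname(name)

# When the service moves to new hardware, only the DNS alias
# changes; every client configuration keeps working untouched.
for name in (MAIL_HOST, CALENDAR_HOST):
    try:
        print(name, "->", lookup(name))
    except socket.gaierror:
        print(name, "-> (not resolvable from this machine)")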

A machine's primary name, however, should never be based on its
function. The calendar server might be named dopey, with calendar
as an alias, because the function may later need to move to a
different machine. Moving a function-based alias is easy, since
the other things tied to the machine's primary name are not
dragged along to the new machine. Rather than binding a service to
the machine's main IP address, give the machine an additional,
dedicated IP address for each service it runs. The service's
address and name can then be moved to another machine fairly
quickly. When creating a service on a server, think about how it
will be moved to another machine in the future. Someday, somebody
will have to move it. Make life as easy as possible for that
person by planning well from the beginning.

A service is relied on by its clients, either directly or
indirectly through other services and systems that depend on it.
Customers expect the service to be available whenever they need
it. Delivering a fundamentally high degree of availability is an
integral part of building infrastructure, and it involves placing
all the equipment associated with that infrastructure in a
protected data center environment. The data center offers
protected power, plenty of cooling, controlled humidity for dry or
damp climates, and a secure environment where the equipment is
safe from accidental damage or disconnection.

Another rationale for placing servers in the data center is that a
server requires far higher network bandwidth than its clients, as
it must communicate with many clients at an acceptable rate. A
server is often connected to several networks, including
administrative ones, to reduce latency by bypassing the network
backbone. High-speed network cabling and equipment are typically
expensive when they first come out, so they are deployed first in
the data center's small area, where spreading them across the
hosts that matter most is relatively inexpensive.

All the machines that make up a service should be housed in the
data center to take advantage of its environment and high-speed
networking. Sometimes, however, part of a service turns out to
depend on something running on a machine that is not in the data
center. The service is only as reliable as the weakest link in the
chain of machines it depends on, and a machine that is not in a
stable environment is more likely to crash, taking the service
down with it. If part of the service depends on a machine outside
the data center, find a way to fix the situation: move that
machine into the data center, transfer the process to a data
center machine, or remove the dependency on the less reliable
machine.

Might a customer wander into the computer room, sit down at the
keyboard and monitor of a vital server, and log in just to check
email? When done, might the customer power off the machine,
assuming it was merely another desktop? Restrict login access on
machines that are part of the infrastructure. Allow only the SAs
responsible for the service to sign in to the machine, whether at
the console or through remote access. A casual user can crash,
reboot, or shut down the system. Worse, anyone logging in at the
console may gain privileged access. For example, at least one
Windows-based email system requires an administrator to be signed
in at the console, where all email messages on the machine can be
read. The more users who can sign in to a machine, the more likely
it is to fail.

A customer who learns that he or she can sign in to a particular
machine for one task, such as checking email, will eventually
start running other programs there that consume CPU, memory, and
I/O. The person may be degrading the service without knowing it.
The safest practice is for customers to open a ticket with the SA
group and ask them to fix service problems.

Nonetheless, the quick and easy thing for users to do is to sign
in to the server and run their tasks there directly, where the
data sits on a local disk and network delays disappear. Users who
can sign in will do so, without realizing the impact on the
system. For example, customers running their jobs directly on an
NFS server degrade its performance, making it more erratic and
less dependable, which in turn drives more people to run their
jobs directly on the server. It is much simpler to understand the
situation and correct the root problem as soon as the first person
is spotted doing it.

Using reliable servers as the components of a service is one way
of making the service reliable as a whole. If the service is
intended to be available to users at other locations, consider
installing backup systems at another site to take over if a
catastrophic failure occurs at the main site. Tightly coupled
subsystems of a service should share the same power source and
network infrastructure, so that the service depends on as few
components as possible.

Assume, for example, that the remote access system is being
upgraded and that part of the upgrade is a new, more secure form
of authentication and authorization. The system is built from
three components: the box handling the remote communications, the
server verifying that people are who they claim to be, and the
server specifying what each person is allowed to access. If the
three are on separate power supplies, the failure of any one power
supply causes the whole service to fail. If they share the same
power supply, faults in the other power supplies leave the service
unaffected.
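
The reasoning can be quantified as a sketch; the 99 percent
availability per independent power supply is an invented
placeholder figure.

# dependency_math.py -- why shared components help a tightly
# coupled service.

supply_availability = 0.99

# Three components on three independent supplies: the service is up
# only when ALL three supplies are up (a serial dependency chain).
three_supplies = supply_availability ** 3

# The same three components sharing one supply: one term only.
one_supply = supply_availability

print(f"three independent supplies: {three_supplies:.4f}")  # ~0.9703
print(f"one shared supply:          {one_supply:.4f}")      # 0.9900
# Every extra component a service depends on multiplies in another
# chance to fail -- the arithmetic behind "keep it simple."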

Likewise, because the components are linked by the same network,
the service becomes inoperative if that network fails. If,
instead, they are spread across three networks, with several
different switches and routers participating in inter-component
communications, even more components can malfunction and take the
service down. Keeping a service's dependencies as few and as
simple as possible is the single most effective way to keep it
reliable.

A service's dependencies can be traced by following each critical
component down until reaching servers and services that depend on
nothing else. Should the service still be available at a remote
site when the link to the main site is down? Is it reasonable to
provide the service at the remote site at all? Name service, for
example, should be provided on both sides of a link that might go
down, because many things at a remote location depend on it even
when only local machines are involved. A remote access service, by
its nature, must let people connect at their various locations
even when a link is dead. Addressing all these issues may require
deploying backup servers at remote offices and arranging for
databases to resynchronize when the link is restored.

However, keeping a massive database or file system accessible in
remote offices when their link is down is unlikely to be
practical. On the other hand, if each customer group is served by
its own file system, at most about one-third of the customers will
be unable to work during an outage.

A service built from multiple server programs, or daemons, also
permits decisions about separating them, with equipment, cost, and
staffing implications. If the software is built as several
cooperating daemons that communicate over a network interface,
consider whether to place all the components on a single machine
or to split them across several machines.

Security, performance, or scaling considerations can decide the
question. For instance, a web service with a database back end
might run the database on a separate machine, in order to tune
that machine for database access, shield it from direct Internet
connectivity, and scale the front end of the service by adding web
servers in parallel without having to touch the database machine.

In addition, one of the components may at first be used by only
this one service but later be used by others. For example,
consider a calendar service that uses a Lightweight Directory
Access Protocol (LDAP) server, where the calendar is the primary
service relying on LDAP. Should the calendar service and the
directory service live on the same machine or on separate ones? If
other applications may eventually use the LDAP server, placing it
on its own machine, rather than on a shared one, lets the calendar
component be upgraded and repaired independently of the LDAP
software. Conversely, two daemons may be absolutely bound
together, never used independently. In that case, all other things
being equal, hosting both on the same machine makes sense, because
the service then depends on only one machine rather than two.

Another aspect of building a service concerns how the hardware,
software, and support are provided to the customers.
Centralization means that the hardware, software, and support are
managed on a single set of servers by a central group of SAs,
rather than by separate business units duplicating one another's
work on their own servers. A central helpdesk provides support for
all of it. Centralizing services and building them in standard
ways makes them easier to support and reduces training costs.

To ensure good support for a service the customers count on, the
SA team as a whole has to understand it. That means each service
should be fully integrated into the helpdesk process and, where
possible, should use hardware from a standard vendor. The service
should be configured and documented in a consistent manner so that
the SA responding to a support call knows where to find everything
and can react quickly.

Even so, multiple instances of a service can be difficult to
support. For example, the helpdesk staff need a way to determine
which print server serves a particular customer who calls with a
question. Some services, such as telecommunications,
authentication, and networking, are inherently part of the
infrastructure and need to be centralized. At large sites, such
services should be built with a central hub that feeds information
to and from distributed geographic and organizational replicas.
Other services, such as file service and CPU farms, are more
naturally consolidated along departmental boundaries.

Pay attention to performance characteristics when designing a
service, even while many more challenging technical problems
demand attention. Solving all the hard problems counts for little
if the service performs poorly in the eyes of the people who use
it. Size the machines to survive the anticipated load. To create a
service that performs well, consider how it will be used and look
for ways to split it across multiple machines.

From the outset, consider how the system will perform as usage
grows beyond what the original deployment must handle. Load
testing generates an artificial load on a machine to see how it
responds. For example, generate 100 hits per second on a web
server and measure the latency, the average time taken to fulfill
a single request. Testing may show that the application performs
well for a small number of simultaneous users, but how will RAM,
I/O, and other resources hold up when the service goes live and is
used simultaneously by hundreds or thousands of people?
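
A minimal load-probe sketch using only the Python standard
library; the URL, request count, and concurrency are placeholders,
and a production test would use a dedicated tool and ramp the load
up gradually.

# load_probe.py -- time a burst of concurrent requests.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical service under test
REQUESTS = 100
CONCURRENCY = 10

def timed_fetch(url):
    """Fetch the URL once; return the request latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_fetch, [URL] * REQUESTS))

print(f"mean latency:  {sum(latencies) / len(latencies):.3f} s")
print(f"worst latency: {max(latencies):.3f} s")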

Consider how the service works when choosing the machines that
will run it. If it keeps huge data tables in memory, look for
servers with fast memory access and large disk caches. If it is a
network-based service that transmits large quantities of data to
customers or to servers around the world, provision multiple
high-speed network interfaces and plan how to route traffic across
those interfaces.

Likewise, look at clustering technologies that allow closely
interconnected machines running the same service to appear as a
single entity. If a service generates a lot of network traffic, it
may need to be distributed to give remote users reasonable
performance, perhaps by placing one or two servers at each remote
location. Quality-of-service or sophisticated queuing mechanisms
may be enough in some cases to make performance reasonable. Look,
too, for other opportunities to reduce network traffic.

A service is not complete, and cannot properly be considered a
service, until it is monitored for availability, problems, and
performance, and capacity-planning processes are in place. Before
too many customers are upset by problems, the helpdesk or
frontline support group needs to be told directly about
difficulties with the service. A customer who regularly discovers
a major service problem and has to phone in before anyone starts
looking into it perceives a very poor level of service. Customers
do not want to feel that they are the only ones paying attention
to problems.
On the other hand, problems detected and fixed before anyone
notices them are like trees falling in the forest with nobody
around to hear them. For example, if a failure occurs over the
weekend and an alert allows it to be repaired before Monday
morning, the customers never need to know that anything went
wrong. (In that case, announcing by email that the problem was
detected and resolved lets the SA team get credit for it.)

The SA team must also continually examine the service from a
capacity-planning perspective. Depending on the service, capacity
planning may cover network bandwidth, storage space, transaction
rates, licenses, and physical hardware availability. Customers
reasonably expect capacity to be part of every service and to grow
with their needs. To plan efficiently, usage monitoring must be
built in as part of the service.

The way a new service is rolled out to the customers is every bit
as important as the way it is designed. The rollout and the
customers' first experiences with the service shape how they will
view it in the future. A core piece of making a strong first
impression is ensuring that documentation is available and that
the helpdesk is fully briefed on the new service, including all
the relevant support procedures. Nothing is worse than having a
problem with an unfamiliar service and finding, when asking for
assistance, that nobody seems to know anything about it.

Ideally, no software installation or configuration change should
be needed on the customers' machines to use the new service,
because visiting desktops is inconvenient for customers and costly
to maintain, especially where new machines are deployed
frequently.

In addition to building a solid infrastructure, providing
monitoring, easy maintenance and support, and meeting all the
customers' essential requirements, certain other issues should be
considered. One fundamental practice at large sites is running
each service on dedicated machines. Another ideal in designing
services is to make them fully redundant. Some services are so
important that they must be fully redundant regardless of the size
of the site; consider making the others fully redundant as the
organization grows.

Large sites can readily justify dedicated machines, based on
customer needs, but small sites find the cost much harder to
justify. Using a dedicated machine for each service makes the
service more reliable, simplifies debugging when there are
failures, reduces the scope of outages, and makes upgrades and
capacity planning far simpler. Growing sites typically end up with
a single administrative machine that is the center of all
essential services, providing authentication, authorization,
printing, email, and more.

Eventually, owing to increasing demand, such a machine must be
broken up and its services spread over multiple machines. By the
time the SAs get funding for additional equipment, the machine may
have so many services and dependencies that it is very difficult
to split apart. Dependence on the IP address is the hardest to
untangle when moving services from one machine to another. Many
products have the server's IP address hard-coded into all the
clients; network devices, such as firewalls and routers, may also
have the IP address hard-coded in their configurations. It is very
difficult to divide such a center-of-the-universe host into
separate hosts, and it gets harder the longer the situation
persists and the more services are provided on it.

Full redundancy means having an equivalent server or group of
servers ready to take over from the primary system in case of
failure. The secondary can take over from a failed primary in
various ways: through human intervention, automatically after the
primary crashes, or by the primary and secondary sharing the
workload until one fails and the remaining server carries it all.
The appropriate level of redundancy varies by service. The
software used to provide a service may dictate that the redundancy
take the form of a live passive slave server that responds to
requests only when the master server is down.

In all cases, the failover mechanism must ensure that the data
stays synchronized and that data integrity is maintained. With
clustered servers, and in other configurations in which secondary
machines run alongside the primary ones, the extra machines spread
the load and improve performance under normal conditions. When
using such a system, take care not to let the load reach the point
where performance would be unacceptable if one of the servers
crashed. Before reaching that point, add more servers in parallel
with the existing ones.
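
A sketch of that capacity rule, often called N+1 provisioning,
with invented figures:

# n_plus_one.py -- keep enough headroom that losing one server
# still leaves acceptable load. The figures are placeholders.

PER_SERVER_QPS = 500
CURRENT_LOAD_QPS = 3600
SERVERS = 9

def survives_one_failure(servers, per_server_qps, load_qps):
    """True if N-1 servers can still carry the whole load."""
    return (servers - 1) * per_server_qps >= load_qps

if survives_one_failure(SERVERS, PER_SERVER_QPS, CURRENT_LOAD_QPS):
    print("OK: one server can fail without overload")
else:
    print("Add another server before load grows further")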

Some services are so vital to a site's minute-to-minute operation
that they are made fully redundant quite early in the site's
existence. Name and authentication services are typically the
first to get full redundancy, partly because the software is
already designed to support secondary servers and partly because
they are so critical. A side benefit of this redundancy is that it
makes upgrades easier: a rolling upgrade can be performed, taking
down one host at a time without interrupting the service as a
whole, though it may reduce performance.

Knowing the individual components of a typical service transaction
makes it possible to scale the service with much greater
precision. Build a dataflow model of a single transaction: the
memory used by the server process, the number and size of the
packets involved, the number of open sockets consumed while a
request is served, and the supporting transactions required for it
to take place, such as a name lookup via DNS. Even factors that
are technically out of reach, such as the performance of the
root-level DNS name servers, can influence the model.

It is relatively common to investigate a scaling failure of a
service and discover that the bottleneck lies elsewhere in the
network. When the dataflow model correctly captures how the
service works, scaling problems can be solved by asking which part
of the dataflow is the weak point, then testing each component
under real or simulated load to see how it behaves and fails. For
example, if a database that handles 100 QPS is replicated on a
second server with requests split between the two, its capacity
rises to 200 QPS, and the web site can handle twice as many hits
per second, provided there is no other bottleneck.
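
In a dataflow model, the service's capacity is the minimum over
its components' capacities. A sketch, mixing the example's
database figure with invented placeholders for the other
components:

# dataflow_capacity.py -- read the bottleneck off a dataflow model.

capacity_qps = {
    "web front end": 150,   # invented placeholder
    "database": 100,        # the weak stage in the example
    "name service": 2000,   # invented placeholder
}

def service_capacity(components):
    """Return the bottleneck component and the capacity it imposes."""
    name = min(components, key=components.get)
    return name, components[name]

print(service_capacity(capacity_qps))  # -> ('database', 100)

# Replicate the database and split requests across two servers:
capacity_qps["database"] *= 2          # now 200 QPS
print(service_capacity(capacity_qps))  # the bottleneck moves elsewhere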

Checking through the logs gives insight into the system's
utilization patterns: how many users are on simultaneously during
various parts of the day relative to the total user population. To
explore further, determine, for example, whether the IMAP server
loads an index file, or even the whole mailbox, into memory. If
so, estimate the total size of the data to be loaded, which can be
measured as the strict sum of all the customers' index files; as a
mean or median, depending on where most index files fall on the
size curve; or by counting only the index files in use at peak
times and basing the measurements on those.
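
The three measurements can be sketched directly; the per-customer
index sizes below are invented sample data.

# mailbox_sizing.py -- the three sizing measures described above,
# applied to per-customer index-file sizes in megabytes.
from statistics import mean, median

index_mb = [2, 3, 3, 4, 5, 8, 12, 40, 95]  # one entry per customer
peak_positions = {2, 4, 7}                 # indexes in use at peak

strict_total = sum(index_mb)
peak_total = sum(index_mb[i] for i in peak_positions)

print(f"strict total of all indexes: {strict_total} MB")
print(f"mean / median index size: {mean(index_mb):.1f} / {median(index_mb)} MB")
print(f"total for peak-time users only: {peak_total} MB")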

Finally, stand back and account for every step in the dataflow. A
customer desktop performs a name lookup to find the mail service,
and that lookup is part of the load in the dataflow analysis.
Customers using a webmail client access the service through a web
site, and that program in turn connects to the mail system. The
webmail service performs its own authentication and makes
additional name lookups to reach the IMAP server, which may then
contact a database server and trigger a database transaction. On a
sample setup, it may take some traffic analysis, along with vendor
documentation, system traces, and so on, to obtain reliable
figures of the sort needed for large-scale planning.

Building and maintaining services is a large part of an SA's job,
and how well the SA does that work determines how well the
customers' needs are met and, ultimately, how satisfied they are
with the SA team. Every service created should improve customer
support, either directly by providing something the customers need
or indirectly by making the SA team more effective.

A system administrator can build solid services by using dedicated
server machines, keeping the design simple, monitoring the servers
and their software, following the site's policies, and
centralizing services onto a small number of systems. Building
quality services also means planning for growth and maintenance
beyond the original specifications. Keeping the service as
independent as possible of the particular hardware on which it
runs is one of the primary ways to make it easy to manage and
upgrade. The final, and perhaps most visible, part of creating a
new service is rolling it out with the least inconvenience to the
customers.

Assessing learning

Activity 1
Name : _____________________________________________________ Date: ________________
Course/Year/Section: ____________________________________

1. List all the services in your computing environment. What hardware and software make
up each one? List their dependencies.
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

2. Select a service that you predict will need to be redesigned. What would be needed to
follow the recommendations in this chapter? How would you roll out the service to the clients?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

3. What services rely on machines that are not in the machine room? How would you
eliminate those dependencies?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

4. What services need to be monitored? How would you develop the monitoring to
be service-based rather than simply machine-based? Does the
monitoring system open trouble tickets or page people appropriately? If not, how
difficult would it be to add that functionality?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
5. For a machine that has numerous services running on it, how would they be divided up so
that each service runs on a dedicated machine? What would the effect on the customers be
during that process? Would this improve or degrade service?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

6. How is capacity planning performed? Is it satisfactory, or how could it be improved?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

7. What services have full redundancy? How is that redundancy provided? Are
there other services that should be made redundant?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________

8. Review the discussion of bandwidth versus latency. What would the mathematical
formula look like for the two suggested solutions: batched requests and windowed
requests?
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
____________________________________________________________________________________________________
