
Computer types

I, Computer: Definition

A computer is a machine that can be programmed to manipulate symbols. Its principal characteristics are:

It responds to a specific set of instructions in a well-defined manner.
It can execute a prerecorded list of instructions (a program).
It can quickly store and retrieve large amounts of data.

Therefore computers can perform complex and repetitive procedures quickly, precisely and reliably. Modern
computers are electronic and digital. The actual machinery (wires, transistors, and circuits) is called
hardware; the instructions and data are called software. All general-purpose computers require the following
hardware components:

Central processing unit (CPU): The heart of the computer, this is the component that actually executes
instructions organized in programs ("software") which tell the computer what to do.
Memory (fast, expensive, short-term memory): Enables a computer to store, at least temporarily, data,
programs, and intermediate results.
Mass storage device (slower, cheaper, long-term memory): Allows a computer to permanently retain
large amounts of data and programs between jobs. Common mass storage devices include disk drives
and tape drives.
Input device: Usually a keyboard and mouse, the input device is the conduit through which data and
instructions enter a computer.
Output device: A display screen, printer, or other device that lets you see what the computer has
accomplished.

In addition to these components, many others make it possible for the basic components to work together
efficiently. For example, every computer requires a bus that transmits data from one part of the computer to
another.

II, Computer sizes and power

From least powerful to most powerful: personal computers, workstations, minicomputers, mainframes,
supercomputers.

Computers can be generally classified by size and power as follows, though there is considerable overlap:
Personal computer: A small, single-user computer based on a microprocessor.
Workstation: A powerful, single-user computer. A workstation is like a personal computer, but it has a
more powerful microprocessor and, in general, a higher-quality monitor.
Minicomputer: A multi-user computer capable of supporting up to hundreds of users simultaneously.
Mainframe: A powerful multi-user computer capable of supporting many hundreds or thousands of users
simultaneously.
Supercomputer: An extremely fast computer that can perform hundreds of millions of instructions per
second.
Supercomputer and Mainframe

Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very
expensive and are employed for specialized applications that require immense amounts of mathematical
calculations (number crunching). For example, weather forecasting requires a supercomputer. Other uses of
supercomputers include scientific simulations, (animated) graphics, fluid dynamics calculations, nuclear
energy research, electronic design, and analysis of geological data (e.g. in
petrochemical prospecting). Perhaps the best known supercomputer
manufacturer is Cray Research.

Mainframe was a term originally referring to the cabinet containing the central processor unit or "main
frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs
in the early 1970s, the traditional big iron machines were described as "mainframe computers" and
eventually just as mainframes. Nowadays a Mainframe is a very large and expensive computer capable of
supporting hundreds, or even thousands, of users simultaneously. The chief difference between a
supercomputer and a mainframe is that a supercomputer channels all its power into executing a few
programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In
some ways, mainframes are more powerful than supercomputers because they support more simultaneous
programs. But supercomputers can execute a single program faster than a mainframe. The distinction
between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to
market its machines.

Minicomputer
It is a midsize computer. In the past decade, the distinction between large minicomputers and small
mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But
in general, a minicomputer is a multiprocessing system capable of supporting up to about 200 users
simultaneously.

Workstation

It is a type of computer used for engineering applications (CAD/CAM), desktop publishing, software
development, and other types of applications that require a moderate amount of computing power and
relatively high quality graphics capabilities. Workstations generally come with a large, high-resolution
graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most
workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a
diskless workstation, comes without a disk drive. The most common operating systems for workstations are
UNIX and Windows NT. Like personal computers, most workstations are single-user computers. However,
workstations are typically linked together to form a local-area network, although they can also be used as
stand-alone systems.
N.B.: In networking, workstation refers to any computer connected to a local-area network. It could be a
workstation or a personal computer.

Personal or micro
Computers for personal use come in all shapes and sizes, from tiny PDAs (personal digital assistant) to
hefty PC (personal computer) towers. More specialized models are announced each week - trip planners,
expense account pads, language translators...

Hand-held (HPC)
Laptop/Notebook
PDA
Tablet PC

Types of computer according to technology

Hybrid computers are computers that comprise features of analog computers and digital computers. The
digital component normally serves as the controller and provides logical operations, while the analog
component normally serves as a solver of differential equations.

An analog computer (spelled analogue in British English) is a form of computer that uses electrical[1],
mechanical or hydraulic phenomena to model the problem being solved. More generally an analog computer
uses one kind of physical quantity to represent the behavior of another physical system, or mathematical
function. Modeling a real physical system in a computer is called simulation.

A digital system is one that uses discrete values, such as electrical voltages, to represent numbers or non-
numeric symbols such as letters or icons, for input, processing, transmission, storage, or display, rather than
a continuous range of values (i.e., as in an analog system).
The distinction between "digital" and "analog" can refer to method of input, data storage and transfer, or the
internal working of a device.
How computers work
Main articles: Central processing unit and Microprocessor
A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the
memory, and the input and output devices (collectively termed I/O). These parts are interconnected by
busses, often made of groups of wires.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are
collectively known as a central processing unit (CPU). Early CPUs were composed of many separate
components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit
called a microprocessor.
Control unit
Main articles: CPU design and Control unit
The control unit (often called a control system or central controller) directs the various components of a
computer. It reads and interprets (decodes) instructions in the program one by one. The control system
decodes each instruction and turns it into a series of control signals that operate the other parts of the
computer.[11] Control systems in advanced computers may change the order of some instructions so as to
improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps
track of which location in memory the next instruction is to be read from.[12]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.
The control system's function is as follows—note that this is a simplified description and some of these steps
may be performed concurrently or in a different order depending on the type of CPU:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other
systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device).
The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to
perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output
device.
8. Jump back to step (1).
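The cycle above can be sketched as a toy interpreter. The instruction set, opcodes, and memory layout here are invented purely for illustration; real CPUs are vastly more complex, but the fetch-increment-decode-execute loop is the same.

```python
# A minimal sketch of the fetch-decode-execute cycle described above.
# Opcodes (1=LOAD, 2=ADD, 3=STORE, 4=JUMP, 0=HALT) are made up for this example.

def run(memory):
    """Execute a tiny program held in 'memory' (a flat list of numbers)."""
    pc = 0                           # program counter: where the next instruction is
    acc = 0                          # a single accumulator register
    while True:
        opcode = memory[pc]          # 1-2. fetch and decode the instruction code
        operand = memory[pc + 1]     #      (each instruction occupies two cells)
        pc += 2                      # 3.   increment the program counter
        if opcode == 1:              # LOAD: copy a memory cell into the accumulator
            acc = memory[operand]
        elif opcode == 2:            # ADD: add a memory cell to the accumulator
            acc += memory[operand]
        elif opcode == 3:            # STORE: write the accumulator back to memory
            memory[operand] = acc
        elif opcode == 4:            # JUMP: overwrite the program counter (control flow)
            pc = operand
        elif opcode == 0:            # HALT
            return memory

# Program: load cell 8, add cell 9, store the sum in cell 10, halt.
mem = [1, 8, 2, 9, 3, 10, 0, 0, 20, 22, 0]
print(run(mem)[10])  # → 42
```

Note how the JUMP opcode simply assigns to the program counter, which is exactly how loops and conditional execution are implemented on real hardware.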
Since the program counter is (conceptually) just another set of memory cells, it can be changed by
calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be
read from a place 100 locations further down the program. Instructions that modify the program counter are
often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often
conditional instruction execution (both examples of control flow).
It is noticeable that the sequence of operations that the control unit goes through to process an instruction is
in itself like a short computer program - and indeed, in some more complex CPU designs, there is another
yet smaller computer called a microsequencer that runs a microcode program that causes all of these events
to happen.
Arithmetic/logic unit (ALU)
Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.
The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or
might include multiplying or dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can
only operate on whole numbers (integers) whilst others use floating point to represent real numbers—albeit
with limited precision. However, any computer that is capable of performing just the simplest operations can
be programmed to break down the more complex operations into simple steps that it can perform. Therefore,
any computer can be programmed to perform any arithmetic operation—although it will take more time to
do so if its ALU does not directly support the operation. An ALU may also compare numbers and return
boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other
("is 64 greater than 65?").
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating
complicated conditional statements and processing boolean logic.
Superscalar computers contain multiple ALUs so that they can process several instructions at the same time.
Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform
arithmetic on vectors and matrices.
Memory
Main article: Computer storage
Magnetic core memory was popular main memory for computers through the 1960s until it was completely
replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell
has a numbered "address" and can store a single number. The computer can be instructed to "put the number
123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468
and put the answer into cell 1595". The information stored in memory may represent practically anything.
Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU
does not differentiate between different types of information, it is up to the software to give significance to
what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits
(called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127.
To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When
negative numbers are required, they are usually stored in two's complement notation. Other arrangements are
possible, but are usually not seen outside of specialized applications or historical contexts. A computer can
store any kind of information in memory as long as it can be somehow represented in numerical form.
Modern computers have billions or even trillions of bytes of memory.
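The byte ranges and two's complement representation described above can be observed directly from Python's built-in integer/byte conversions:

```python
# A byte holds 256 distinct values; negative numbers are usually stored
# in two's complement notation, and larger numbers span several bytes.
raw = (-1).to_bytes(1, "big", signed=True)
print(raw)                                          # b'\xff': -1 is stored as all ones

# The same byte read back as signed vs unsigned:
print(int.from_bytes(b"\xff", "big", signed=True))  # → -1
print(int.from_bytes(b"\xff", "big"))               # → 255

# A number too large for one byte uses two consecutive bytes:
print((1000).to_bytes(2, "big"))                    # b'\x03\xe8'
```

This also illustrates the point that the CPU does not differentiate between types of information: the byte 0xFF means -1, 255, or something else entirely depending on how the software chooses to interpret it.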
The CPU contains a special set of memory cells called registers that can be read and written to much more
rapidly than the main memory area. There are typically between two and one hundred registers depending on
the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main
memory every time data is needed. Since data is constantly being worked on, reducing the need to access
main memory (which is often slow compared to the ALU and control units) greatly increases the computer's
speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only
memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded
with data and software that never changes, so the CPU can only read from it. ROM is typically used to store
the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the
computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized
program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive
into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not
have disk drives, all of the software required to perform the task may be stored in ROM. Software that is
stored in ROM is often called firmware because it is notionally more like hardware than software. Flash
memory blurs the distinction between ROM and RAM by retaining data when turned off but being
rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM
so its use is restricted to applications where high speeds are not required.[13]
In more sophisticated computers there may be one or more RAM cache memories which are slower than
registers but faster than main memory. Generally computers with this sort of cache are designed to move
frequently needed data into the cache automatically, often without the need for any intervention on the
programmer's part.
Input/output (I/O)
Main article: Input/output
Hard disks are common I/O devices used with computers.
I/O is the means by which a computer receives information from the outside world and sends results back.
Devices that provide input or output to the computer are called peripherals. On a typical personal computer,
peripherals include input devices like the keyboard and mouse, and output devices such as the display and
printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices.
Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics
processing unit might contain fifty or more tiny computers that perform the calculations necessary to display
3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in
performing I/O.

Registers are fast memory, almost always connected to circuitry that allows various arithmetic, logical,
control, and other manipulations, as well as possibly setting internal flags. Most early computers had only
one data register that could be used for arithmetic and logic instructions.
Data registers are used for temporary scratch storage of data, as well as for data manipulations (arithmetic,
logic, etc.). In some processors, all data registers act in the same manner, while in other processors different
operations are performed on specific registers. Address registers store the addresses of specific memory
locations. Often many integer and logic operations can be performed on address registers directly (to allow
for computation of addresses).

Personal computer:

It can be defined as a small, relatively inexpensive computer designed for an individual user. In price,
personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based
on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses
use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet
and database management applications. At home, the most popular use for personal computers is for playing
games and recently for surfing the Internet.

Personal computers first appeared in the late 1970s. One of the first and most popular personal computers
was the Apple II, introduced in 1977 by Apple Computer. During the late 1970s and early 1980s, new
models and competing operating systems seemed to appear daily. Then, in 1981, IBM entered the fray with
its first personal computer, known as the IBM PC. The IBM PC quickly became the personal computer of
choice, and most other personal computer manufacturers fell by the wayside. PC is short for personal
computer or IBM PC. One of the few companies to survive IBM's onslaught was Apple Computer, which
remains a major player in the personal computer marketplace. Other companies adjusted to IBM's
dominance by building IBM clones, computers that were internally almost the same as the IBM PC, but that
cost less. Because IBM clones used the same microprocessors as IBM PCs, they were capable of running the
same software. Over the years, IBM has lost much of its influence in directing the evolution of PCs.
Therefore, after the release of the first PC by IBM, the term PC increasingly came to mean IBM or IBM-
compatible personal computers, to the exclusion of other types of personal computers, such as Macintoshes.
In recent years, the term PC has become more and more difficult to pin down. In general, though, it applies
to any personal computer based on an Intel microprocessor, or on an Intel-compatible microprocessor. For
nearly every other component, including the operating system, there are several options, all of which fall
under the rubric of PC.
Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The
principal characteristics of personal computers are that they are single-user systems and are based on
microprocessors. However, although personal computers are designed as single-user systems, it is common
to link them together to form a network. In terms of power, there is great variety. At the high end, the
distinction between personal computers and workstations has faded. High-end models of the Macintosh and
PC offer the same computing power and graphics capability as low-end workstations by Sun Microsystems,
Hewlett-Packard, and DEC.

III, Personal Computer Types

Actual personal computers can be generally classified by size and chassis / case. The chassis or case is the
metal frame that serves as the structural support for electronic components. Every computer system requires
at least one chassis to house the circuit boards and wiring. The chassis also contains slots for expansion
boards. If you want to insert more boards than there are slots, you will need an expansion chassis, which
provides additional slots. There are two basic chassis designs, desktop models and tower models, but there
are many variations on these two basic types. Then come portable computers, which are computers
small enough to carry. Portable computers include notebook and subnotebook computers, hand-held
computers, palmtops, and PDAs.

Tower model
The term refers to a computer in which the power supply, motherboard, and mass storage devices are stacked
on top of each other in a cabinet. This is in contrast to desktop models, in which these components are
housed in a more compact box. The main advantage of tower models is that there are fewer space
constraints, which makes installation of additional storage devices easier.

Desktop model
A computer designed to fit comfortably on top of a desk, typically with the monitor sitting on top of the
computer. Desktop model computers are broad and low, whereas tower model computers are narrow and
tall. Because of their shape, desktop model computers are generally limited to three internal mass storage
devices. Desktop models designed to be very small are sometimes referred to as slimline models.

Notebook computer
An extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and
are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook
computer and a personal computer is the display screen. Notebook computers use a variety of techniques,
known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of
notebook display screens varies considerably. In terms of computing power, modern notebook computers are
nearly equivalent to personal computers. They have the same CPUs, memory capacity, and disk drives.
However, all this power in a small package is expensive. Notebook computers cost about twice as much as
equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run
them without plugging them in. However, the batteries need to be recharged every few hours.

Laptop computer
A small, portable computer, small enough that it can sit on your lap. Nowadays, laptop computers are more
frequently called notebook computers.

The Five Generations of Computers
The history of computer development is often referred to in reference to the different generations of
computing devices. Each generation of computer is characterized by a major technological development that
fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more
powerful and more efficient and reliable devices. Read about each generation and the developments that led
to the current devices that we use today.
First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often
enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal
of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers
relied on machine language to perform operations, and they could only solve one problem at a time. Input
was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC
was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.
Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was
invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far
superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient
and more reliable than their first-generation predecessors. Though the transistor still generated a great deal
of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-
generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly,
languages, which allowed programmers to specify instructions in words. High-level programming languages
were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also
the first computers that stored their instructions in their memory, which moved from a magnetic drum to
magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
Third Generation - 1964-1971: Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors
were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed
and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards
and monitors and interfaced with an operating system, which allowed the device to run many different
applications at one time with a central program that monitored the memory. Computers for the first time
became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were
built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of
the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the
central processing unit and memory to input/output controls - on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh.
Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and
more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which
eventually led to the development of the Internet. Fourth generation computers also saw the development of
GUIs, the mouse and handheld devices.
Fifth Generation - Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are
some applications, such as voice recognition, that are being used today. The use of parallel processing and
superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and
nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation
computing is to develop devices that respond to natural language input and are capable of learning and self-
organization.

An Introduction to DOS

Introduction
With the advent of graphical user interfaces (GUIs) like Microsoft Windows, the idea of actually using a
command line prompt and typing in commands at the keyboard strikes many people as quite old-fashioned.
Why go to the trouble to memorise commands and their syntax when you can just select them from menus,
fill in dialog boxes with parameters, or drag files onto application icons? In fact, there are many things that
are easier to do from a command prompt than in a GUI; how easy is it in a GUI to rename all files with a
".cxx" extension so that they have a ".cpp" extension instead?
Anyway, let's begin at the beginning. DOS stands for "Disk Operating System", and is a generic name for
the basic IBM PC operating system. Several variants of DOS are available, including Microsoft's version of
DOS (MS-DOS), IBM's version (PC-DOS), and several others. There's even a free version of DOS called
OpenDOS included on this CD.
There are actually several levels to DOS. At the lowest level is the BIOS (Basic Input/Output System) which
is responsible for managing devices like the keyboards and disk drives at the simplest possible level (e.g. the
BIOS lets you say things like "get me sector 5 of track 3 from disk drive 1"). This is done partly by software
in ROM (read-only memory) and partly by BIOS extensions which are loaded when the system first starts up
(with MS-DOS, these are in a file called IO.SYS; on PC-DOS they're in IBMBIO.COM).
The second layer provides a set of higher level services implemented using the low-level BIOS services; you
can now refer to disk drive 1 as "drive A:" and instead of referring to specific sectors or tracks you can refer
to files by name (e.g. LETTER.TXT). You can also treat devices as if they were named files, so that for
example you can use the name PRN: to refer to the printer. In other words, this level provides a file system
(you can refer to files by name and let DOS worry about translating the name into a physical location) as
well as some device independence (you don't have to differentiate storing text in a file from sending it to the
printer). This layer of the system is implemented by another file which is loaded when the system first starts
up (the file is called MSDOS.SYS on MS-DOS systems, and IBMDOS.COM on PC-DOS systems).
The third layer is the command interpreter (or shell), which is what most people think of as DOS (it's not,
but it's what you interact with when you use the computer, so it's an understandable confusion). This is
contained in another file called COMMAND.COM, which is just an ordinary program that is started
automatically (and you can replace it with something better, for example 4DOS, a shareware shell included
on this CD). The shell's job is to display a command prompt on the screen to let you know you're supposed
to type something, then to read a line of text that you type, and to interpret it as a command, which usually
involves loading another program into memory and running it. When it's finished doing this, it displays
another prompt and waits for you to type in another command.

The file system
Before we go any further, it would be a good idea to look at the DOS file system. The file system lets us
store information in named files. You can call a file anything you like (ideally a name that helps you
remember what it contains) as long as you follow certain basic rules:
1. File names can be up to 8 characters long. You can use letters and digits but only a few punctuation
marks (! $ % # ~ @ - ( ) _ { }). You can't exceed 8 characters or use spaces or characters like * or ?
or +. Names are case-insensitive, i.e. it doesn't matter whether you use capitals or lowercase letters;
"A" and "a" are treated as the same thing.
2. File names can also have an extension of up to three characters which describes the type of file.
There are some standard extensions, but you don't have to use them. Examples include COM and
EXE for executable programs, TXT for text files, BAK for backup copies of files, or CPP for C++
program files. The extension is separated by a dot from the rest of the filename.
For example, a file called FILENAME.EXT has an 8-character name (FILENAME) followed by a three-
character extension (.EXT). You could also refer to it as filename.ext since case doesn't matter, but I'm going
to use names in capitals for emphasis throughout this document.
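The 8.3 naming rules above are mechanical enough to check with a short program. The regular expression below is my own encoding of the rules as stated (up to 8 name characters, an optional extension of up to 3, the listed punctuation only, case-insensitive):

```python
# A sketch of an 8.3 filename validator based on the rules described above.
import re

DOS_NAME = re.compile(r"^[A-Z0-9!$%#~@\-(){}_]{1,8}"        # name: 1-8 chars
                      r"(\.[A-Z0-9!$%#~@\-(){}_]{1,3})?$",  # optional .ext, 1-3 chars
                      re.IGNORECASE)                        # case does not matter

def is_valid_dos_name(name):
    return DOS_NAME.match(name) is not None

print(is_valid_dos_name("FILENAME.EXT"))     # → True
print(is_valid_dos_name("toolongname.txt"))  # → False (name exceeds 8 characters)
print(is_valid_dos_name("BAD NAME.TXT"))     # → False (contains a space)
```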
Files are stored in directories; a directory is actually just a special type of file which holds a list of the files
within it. Since a directory is a file, you can have directories within directories. Directory names also follow
the same naming rules as other files, but although they can have an extension they aren't normally given one
(just an 8-character name).
The system keeps track of your current directory, and if you just refer to a file using a name like
FILENAME.EXT it's assumed you mean a file of that name in the current directory. You can specify a
pathname to identify a file which includes the directory name as well; the directory is separated from the
rest of the name by a backslash ("\"). For example, a file called LETTER1.TXT in a directory called
LETTERS can be referred to as LETTERS\LETTER1.TXT (assuming that the current directory contains the
LETTERS directory as a subdirectory). If LETTERS contains a subdirectory called PERSONAL, which in
turn contains a file called DEARJOHN.TXT, you would refer to this file as
LETTERS\PERSONAL\DEARJOHN.TXT (i.e. look in the LETTERS directory for
PERSONAL\DEARJOHN.TXT, which in turn involves looking in the PERSONAL subdirectory for the file
DEARJOHN.TXT).
Every disk has a root directory which is the main directory that everything else is part of. The root directory
is called "\", so you can use absolute pathnames which don't depend on what your current directory is. A
name like \LETTERS\LETTER1.TXT always refers to the same file regardless of which directory you
happen to be working in at the time; the "\" at the beginning means "start looking in the root directory", so
\LETTERS\LETTER1.TXT means "look in the root directory of the disk for a subdirectory called
LETTERS, then look in this subdirectory for a file called LETTER1.TXT". Leaving out the "\" at the
beginning makes this a relative pathname whose meaning is relative to the current directory at the time.
If you want to refer to a file on another disk, you can put a letter identifying the disk at the beginning of the
name separated from the rest of the name by a colon (":"). For example, A:\LETTER1.TXT refers to a file
called LETTER1.TXT in the root directory of drive A. DOS keeps track of the current directory on each disk
separately, so a relative pathname like A:LETTER1.TXT refers to a file called LETTER1.TXT in the
currently-selected directory on drive A.
For convenience, all directories (except root directories) contain two special names: "." refers to the
directory itself, and ".." refers to the parent directory (i.e. the directory that contains this one). For example,
if the current directory is \LETTERS\PERSONAL, the name ".." refers to the directory \LETTERS,
"..\BUSINESS" refers to \LETTERS\BUSINESS, and "..\.." refers to the root directory "\".
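The way relative names and the "." and ".." entries combine with a current directory can be sketched in Python using the standard ntpath module (the Windows flavour of os.path, usable on any platform):

```python
import ntpath  # Windows path semantics, available in the standard library

# ntpath.join applies a relative name to a directory, and ntpath.normpath
# collapses "." and ".." entries, mirroring the DOS examples above.
current = r"\LETTERS\PERSONAL"

print(ntpath.normpath(ntpath.join(current, "DEARJOHN.TXT")))  # \LETTERS\PERSONAL\DEARJOHN.TXT
print(ntpath.normpath(ntpath.join(current, "..")))            # \LETTERS
print(ntpath.normpath(ntpath.join(current, r"..\BUSINESS")))  # \LETTERS\BUSINESS
print(ntpath.normpath(ntpath.join(current, r"..\..")))        # \
```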

Simple commands
If you start up a computer running DOS (or select the "MS-DOS Prompt" icon in Windows), after a bit of
initialisation you will end up with a prompt to let you know that the command interpreter is waiting for you
to type in a command. The prompt might look like this:
C>
This prompt consists of the name of the current disk that you're using (A for the main floppy disk drive, B
for the secondary floppy disk, C for the primary hard disk drive, and so on) followed by ">". You can get a
list of the files in the current directory on that disk by typing the command DIR (short for "directory"):
C>DIR
(The text in bold above indicates what you type in.) When you press the ENTER key, a list of files will be
displayed on the screen, together with their sizes and the dates when they were last modified. If you want to
see a list of the files on disk in the main floppy drive (drive A), you can do it by first selecting A as the
current drive:
C>A:
Just type the letter of the drive you want to select followed by a colon, and then press ENTER. The prompt
will then appear like this:
A>
and you can then type DIR as before:
A>DIR
Alternatively, you can specify the directory you want to list as a parameter separated by one or more spaces
from the command name:
C>DIR A:
which lists the contents of the current directory on drive A, or
C>DIR A:\
which lists the root directory of drive A, or
C>DIR ..
which lists the directory above the one you're currently in. Note that for convenience you can arrange
matters so the prompt tells you the current directory as well as the currently-selected disk drive, so that it
might appear like this:
C:\WINDOWS>
meaning that C:\WINDOWS is the current directory. Now imagine that you type "DIR ..":
C:\WINDOWS>DIR ..
You should be able to see that this will list the contents of C:\, i.e. the root directory on drive C. (From now
on I won't show the prompt in my examples, only what you need to type in.)
The trouble with a command like DIR is that the list of files scrolls off the screen if there are more than a
few files in the directory. Most commands have a set of options that you can use to modify their behaviour,
each introduced by a slash "/" after the command name; the DIR command has a /P
(pause) option which makes it pause after each full screen of output, and waits for you to press a key before
displaying the next screenful. Most commands support a "help" option called "/?"; typing DIR/? will give
you a brief list of the options that DIR recognises, for instance.

Some common commands
Here's a list of some common commands and what they do:
X: -- select X: as the current drive
DIR -- list the current directory
DIR directory -- list the specified directory
CLS -- clear the screen
CD directory -- change current directory to directory
TYPE file -- display specified file on the screen
COPY file1 file2 -- copy the first file to the second
COPY file directory -- copy the file to the specified directory
DEL file -- delete the specified file
EDIT file -- edit the specified file
REN file newname -- rename file to newname
FORMAT drive -- format the disk in the specified drive
MD directory -- make a new directory called directory
RD directory -- remove the specified directory (which must be empty)
In the list above, file (or file1 or file2) is the pathname of a file, directory is the pathname of a directory and
drive is the name of a disk drive (e.g. A: or C:). In the case of the REN (rename) command, newname is the
new file name (not a pathname, because this just renames the file; it can't move the file to a new directory).

Wildcards
In some situations it's useful to be able to specify sets of files to be operated on by a single command. For
example, you might want to delete all files with the extension ".TXT". You can do this like so:
DEL *.TXT
The star "*" matches any name at all. You can read it as "anything .TXT", although if I had to read this
command out loud I'd probably say "del star-dot-text". Anyway, what it means is that any file whose name
matches the pattern *.TXT (any file with a .TXT extension) will be deleted. You can do more sophisticated
things:
COPY NOTE*.TXT NOTES
will copy any file whose name begins with NOTE (i.e. NOTE followed by anything) and has a .TXT
extension to the NOTES subdirectory. The star is referred to as a wildcard by analogy with the role of a
Joker in some card games, where the Joker can be used as any other card. The command
DEL *.*
will delete all the files in the current directory (all files with any name and any extension). Since this is
obviously risky, you'll be prompted with a message that says "Are you sure (Y/N)?". If you are sure, type Y
for "yes"; if not, type N for "no". A non-obvious use of this technique is allowed by the REN (rename)
command:
REN *.TXT *.LST
will rename all files with a .TXT extension to files with the same name but a .LST extension instead. A
limitation of this technique is that you can only use a star as a wildcard at the end of the name or extension
part of a filename; you might think that you could delete all files whose name ends with X like this:
DEL *X.*
Unfortunately this has the same effect as
DEL *.*
since the star matches everything up to the end of the name, and the X after that is just ignored. You can use
a question mark to match a single character:
DEL ?X.*
This will match any file whose name consists of any single character followed by an X with any extension.
To delete any files whose name ends with X would actually require a whole sequence of commands:
DEL X.*
DEL ?X.*
DEL ??X.*
DEL ???X.*
DEL ????X.*
DEL ?????X.*
DEL ??????X.*
DEL ???????X.*
This deletes all files whose name is X, then all files whose name is any one character followed by X, then all
files whose name is any two characters followed by X, and so on up to any seven characters followed by X.
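The DOS wildcard behaviour described above (including the quirk that characters after a "*" are ignored) can be sketched in Python. This is my own simplified reimplementation for illustration, not DOS itself; in particular it treats "?" as matching exactly one character:

```python
# Name and extension are matched separately; "*" matches the rest of that
# part and anything after it in the pattern is ignored, which is why
# DEL *X.* behaves like DEL *.*; "?" matches a single character.
def match_part(pattern: str, text: str) -> bool:
    i = 0
    for ch in pattern:
        if ch == "*":
            return True  # "*" swallows the rest of this part
        if i >= len(text):
            return False
        if ch != "?" and ch.upper() != text[i].upper():
            return False
        i += 1
    return i == len(text)

def dos_match(pattern: str, name: str) -> bool:
    p_name, _, p_ext = pattern.partition(".")
    n_name, _, n_ext = name.partition(".")
    return match_part(p_name, n_name) and match_part(p_ext, n_ext)

print(dos_match("*.TXT", "NOTES.TXT"))  # True
print(dos_match("*X.*", "NOTES.TXT"))   # True -- the X after "*" is ignored
print(dos_match("?X.*", "AX.TXT"))      # True
print(dos_match("?X.*", "NOTES.TXT"))   # False
```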

Redirecting input and output
One nice thing about using a command prompt rather than a GUI is that it's much simpler to capture the
output of a program in a file, or take the input for a program from somewhere other than the keyboard. All
programs have a number of standard input/output channels, of which the most important are the standard
input (normally the keyboard) and the standard output (normally the screen). There is also a standard
error output which is also associated with the screen (and is intended for displaying error messages).
A few programs write their output directly to the screen (or elsewhere), but in many cases they will write
results to the standard output. If you don't do anything unusual, this will be the screen. However, you can
associate the standard output with a file instead of the screen, so that when the program writes anything to the
standard output it'll end up in the file you've specified rather than being displayed on the screen. All you have to
do is specify "> filename" on the command line and the standard output will be routed to the file filename
instead of the screen. For example,
DIR
displays a directory listing on the screen, but
DIR >DIRLIST.TXT
puts the directory listing in a file called DIRLIST.TXT, and doesn't display anything on the screen. (Read
the ">" out loud as "to"; "dir to dirlist-dot-text".) If the file already exists, doing this will overwrite the
existing contents. If you don't want this to happen, use ">>" instead of ">". This will append the program's
output to the existing contents of the file rather than destroying it:
DIR >>DIRLIST.TXT
This has the same effect as before, except that if the file DIRLIST.TXT already exists, the program's output
will be tacked on to the end of it rather than replacing it. The standard input can be redirected using "<":
MYPROG <TESTDATA.TXT
This will run the program MYPROG, but instead of reading its input from the keyboard it will read it from
the file TESTDATA.TXT. This means that you can set up a file of test data for a program you're writing so
that you can try the same test data over and over again while you're debugging, without having to type it all
in every time.
Redirecting the standard error output is more difficult; COMMAND.COM doesn't give you any way to do it,
but you can either get an add-on utility to do it or use a more powerful shell like 4DOS that does allow this
as a standard feature.
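The three redirection operators correspond to ordinary file modes, which this Python sketch illustrates (DIRLIST.TXT and the listed names are just example data, not DOS output):

```python
# ">" truncates, ">>" appends, "<" reads input from a file
# instead of the keyboard.
with open("DIRLIST.TXT", "w") as out:   # like DIR >DIRLIST.TXT
    print("LETTER1.TXT", file=out)

with open("DIRLIST.TXT", "a") as out:   # like DIR >>DIRLIST.TXT
    print("LETTER2.TXT", file=out)

with open("DIRLIST.TXT") as inp:        # like MYPROG <DIRLIST.TXT
    lines = inp.read().splitlines()

print(lines)  # ['LETTER1.TXT', 'LETTER2.TXT']
```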

Pipes
Sometimes you want the output from one command to be processed by another command. For example, if
you use the DIR command to display a long directory listing, it'll just scroll off the screen. Suppose you
didn't know that the DIR command has a /P option; what could you do? There is a useful command called
MORE which displays what it reads from the standard input onto the screen a screenful at a time, and then
waits for you to press a key before continuing. So you could redirect the output of DIR into a temporary file
and then use MORE to display it a screen at a time, like this:
DIR >DIRLIST.TMP
MORE <DIRLIST.TMP
DEL DIRLIST.TMP
This happens often enough that there's a standard way to do it; you just separate the DIR and MORE
commands with a "pipe" symbol ("|"), and the output of the first command is automatically routed to the
input of the second command, so the above three lines can be written as a single line like this:
DIR | MORE
Another example is a cunning way to avoid having to type in Y if you say "DEL *.*"; there is a standard
command called ECHO which just echoes its parameters to the standard output (so ECHO HELLO WORLD
displays the message HELLO WORLD on the screen), and you can use ECHO to give the DEL command
the Y it needs:
ECHO Y | DEL *.*
You'll still see the "Are you sure (Y/N)?" prompt, but the answer is read from the output of the ECHO
command rather than from the keyboard (and you'll see how to overcome this in the next section).
Pipes (and all the other redirection facilities) are borrowed from the Unix world, where they've been
commonplace for nearly twenty years now. Unix has accumulated a vast collection of filter programs, whose
only purpose is to process the output from one command before passing it on to the next one. A classic
example involves using a handful of simple programs connected in a continuous pipeline to process a text
file of any size and list the ten commonest words together with the number of occurrences of each. This
involves converting the input to lowercase (to ignore case differences), replacing all non-alphabetic
characters with line breaks (so you end up with one word per line), sorting the result to give a list of words in
alphabetical order, removing adjacent identical lines (so you end up with a list of unique words and the
number of occurrences of each), sorting the list by number of occurrences and then displaying the first ten
lines of the result. This CD contains many useful filters taken from the Unix world, and it's amazing what
you can do with them when you've had a bit of practice!
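The word-counting pipeline described above can be sketched in a few lines of Python, with each step annotated by the filter it stands in for:

```python
import re
from collections import Counter

# Lowercase, split into words, count occurrences, take the commonest n.
def top_words(text: str, n: int = 10):
    words = re.split(r"[^a-z]+", text.lower())  # tr: one word per "line"
    counts = Counter(w for w in words if w)     # sort | uniq -c
    return counts.most_common(n)                # sort -rn | head

print(top_words("the cat and the dog and the bird", 2))  # [('the', 3), ('and', 2)]
```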

Using device names as files
As I mentioned earlier, DOS provides names for most I/O devices and you can use these names in place of
filenames. Here are some of the device names available:
PRN: -- the current printer
LPT1: -- the first parallel port (normally the printer)
COM1: -- the first serial port
COM2: -- the second serial port
NUL: -- the "null device"
For example, if you want to print a file you can just use one of the following commands:
COPY FILE PRN:
TYPE FILE >PRN:
The first of these copies the file FILE to the printer; the second one displays it to the standard output but
redirects the standard output to the printer.
The null device NUL: is for situations where you want an empty input file or you want program output to be
ignored. If you read from it, it looks like a file containing zero bytes of data; if you write to it, everything
you write is cheerfully ignored and is just discarded. This is useful for throwing away unwanted output:
COPY RESULT*.TXT PRN: >NUL:
ECHO Y | DEL *.* >NUL:
The first example copies all files whose names begin with RESULT to the printer, and throws away all the
messages that the COPY command normally displays on the screen ("1 file copied" or whatever). The
second is a completely silent way of deleting all files in the current directory; the "Are you sure (Y/N)?"
prompt is thrown away, and the answer to this unasked question is then read from the output produced by the
ECHO command.

Batch files
Sometimes you will want to execute the same sequence of commands over and over again. This is easy to
do; just create a file containing those commands (one command per line) and give it a name with a .BAT
extension (e.g. DO_IT.BAT). A file like this is known as a batch file. Once you've done this you can just
type DO_IT at the command prompt, and (assuming that DO_IT.BAT is the first file that the command
interpreter finds when it's looking for the command, as described above) the command interpreter will read
this file and execute each line as a separate command. For example, assume that DO_IT.BAT contained the
following lines:
CLS
ECHO HELLO, WORLD
Typing DO_IT would clear the screen and then display the message "HELLO, WORLD". By default, each
line of the file is displayed on the screen ("echoed") before it's executed. This is useful for debugging, but it
ruins the effect in the batch file above. You can disable the echoing by prefixing the command with "@":
@CLS
@ECHO HELLO, WORLD
Another method is to disable echoing at the start of the batch file with the command "ECHO OFF" (which
needs to be prefixed by "@", otherwise it will be echoed to the screen):
@ECHO OFF
CLS
ECHO HELLO, WORLD
Batch files can be quite sophisticated. For example, you can get at the command line parameters by referring
to them as %1, %2, %3 and so on up to %9. The following batch file displays the second command line
parameter followed by the first one:
@ECHO OFF
ECHO %2 %1
If this is saved as DO_IT.BAT, typing
DO_IT HELLO WORLD
will display the message
WORLD HELLO
There are also commands which give you the basics of a simple programming language: IF statements to
choose between alternative courses of action, FOR loops to repeat a sequence of actions, tests to determine if
a command executed successfully, and so on.
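The %1..%9 parameters correspond roughly to sys.argv[1..9] in a Python script. This sketch mimics the DO_IT.BAT example above (the function name is my own, for illustration):

```python
# Like ECHO %2 %1: print the second parameter, then the first.
def format_args(argv):
    return f"{argv[2]} {argv[1]}"

# argv[0] is the program name, just as %0 is the batch file's name.
print(format_args(["do_it.py", "HELLO", "WORLD"]))  # WORLD HELLO
```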
One nasty trap to watch out for is when you try to execute a batch file from inside another batch file.
Imagine you've got this in DO_IT.BAT:
DO_OTHER.BAT
ECHO Finished!
The ECHO command on the second line won't be executed; if you refer to one batch file inside another, the
effect is a GOTO rather than a CALL (i.e. you go to the inner batch file but you don't remember where you
came from, so at the end of the inner batch file you end up back at the command prompt). The solution is to
use the internal CALL command instead:
CALL DO_OTHER.BAT
ECHO Finished!
This will work as expected; it'll execute DO_OTHER.BAT and then display the message "Finished!".

The AUTOEXEC.BAT file
The file C:\AUTOEXEC.BAT is a special one, because when the system is first started the command
interpreter will automatically execute this file as a batch file (hence the name). This is where to put all your
system startup commands; for example, setting the path, loading your keyboard and mouse driver, running a
virus checker, or whatever. Whenever you install any new DOS software, you'll need to check whether you
need to update this file (and if you do, you'll either need to restart the system or type C:\AUTOEXEC.BAT
to re-execute it before any of the changes will take effect). You'll often need to update your path to include
the directory where your new software was installed. Some software will do this automatically, and
amazingly often it will get it wrong or do it in such a way that something else no longer works. What I
always do is to have a separate file (C:\BOOT\AUTOEXEC.BAT) where all the "real" work is done, and
then I have a one-line C:\AUTOEXEC.BAT file which says this:
@C:\BOOT\AUTOEXEC.BAT
Remember, C:\BOOT\AUTOEXEC.BAT will never go back to C:\AUTOEXEC.BAT when it's finished (I
deliberately didn't use a CALL command). This means that anything that an auto-installation utility adds to
the end of C:\AUTOEXEC.BAT will be ignored, and I can inspect it and decide whether it's made a sensible
change or not. If so, I can then update my real AUTOEXEC.BAT file to include the changes.

Data processing

Data processing is any computer process that converts data into information or knowledge.
1. Modes of data processing
a) Batch processing: the data is first grouped into batches before being processed serially, with
results obtained periodically. Batch processing is a system for processing data with little or no
operator intervention; batches of data are prepared in advance to be processed during regular 'runs'.
b) Online processing: the result of each data processing transaction is available immediately. The
computer undertakes tasks where transactions are initiated and data entered from terminals
located in the users' offices. Common examples are the booking of airline tickets, holidays,
hotels, and car hire, and transactions in building societies and some banks.
c) Real-time processing: a data processing system in which the time interval required to process
and respond to inputs is so small that the response itself is useful for controlling the physical
activity of a process.
d) Distributed data processing: a computer system linked by a communication network in which
processing is performed by separate computers dispersed throughout the organization. This
allows greater flexibility in structure.

2. Basic function of data processing
a) Origination: the nature, type, and origin of the source documents must be determined.
b) Data capture: data must be recorded in some machine-readable form; which data should be
collected and processed depends on the organization and the system.
c) Sorting: arranging data in a defined sequence.
d) Merging: combining multiple files into a single sequence, provided the files are already sorted.
e) Calculating: the arithmetic manipulation of data to create meaningful results.
f) Summarizing: reducing masses of data to a more concise and usable form.
g) Output: the delivery or communication of the information or results, carried out by reporting,
issuing documents, retrieval, analysis, communication and reproduction.
h) Storage: the results of processing must be retained for future use or reference.
3. Data Hierarchy
a) Bit: a binary digit, with value 0 or 1.
b) Byte: a group of bits (usually eight) forming the basic unit of information, e.g. one character.
c) Field: one or more bytes that contain a data attribute of an entity.
d) Record: a collection of related fields.
e) File: a collection of related records.
f) Database: a collection of related files.
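The hierarchy can be sketched with plain Python structures (the names and values are made up for illustration): a field is a named value, a record groups related fields, a file groups related records, and a database groups related files.

```python
field = ("NAME", "ALICE")                     # field: one attribute of an entity
record = {"NAME": "ALICE", "AGE": 30}         # record: a collection of related fields
file_ = [record, {"NAME": "BOB", "AGE": 25}]  # file: a collection of related records
database = {"STAFF": file_}                   # database: a collection of related files

print(database["STAFF"][1]["NAME"])  # BOB
```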

Software Development Life Cycle

Once upon a time, software development consisted of a programmer writing code to solve a problem
or automate a procedure. Nowadays, systems are so big and complex that teams of architects,
analysts, programmers, testers and users must work together to create the millions of lines of custom-
written code that drive our enterprises.
To manage this, a number of system development life cycle (SDLC) models have been created:
waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and
stabilize.
The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of
each stage becomes the input for the next. These stages can be characterized and divided up in
different ways, including the following:
• Project planning, feasibility study: Establishes a high-level view of the intended project and
determines its goals.
• Systems analysis, requirements definition: Refines project goals into defined functions and
operation of the intended application. Analyzes end-user information needs.
• Systems design: Describes desired features and operations in detail, including screen layouts,
business rules, process diagrams, pseudocode and other documentation.
• Implementation: The real code is written here.
• Integration and testing: Brings all the pieces together into a special testing environment, then
checks for errors, bugs and interoperability.
• Acceptance, installation, deployment: The final stage of initial development, where the software is
put into production and runs actual business.
• Maintenance: What happens during the rest of the software's life: changes, correction, additions,
moves to a different computing platform and more. This, the least glamorous and perhaps most
important step of all, goes on seemingly forever.

Introduction to Flowcharts and Algorithms

A flowchart is a graphical representation of an algorithm. These flowcharts play a vital role in the
programming of a problem and are quite helpful in understanding the logic of complicated and lengthy
problems. Once the flowchart is drawn, it becomes easy to write the program in any high level language.
Often we see how flowcharts are helpful in explaining the program to others. Hence, it is correct to say that a
flowchart is a must for the better documentation of a complex program.
Flowcharts are usually drawn using a small set of standard symbols:

Oval (terminal) -- start or end of the program

Rectangle (process) -- computational steps or processing functions of a program

Parallelogram (input/output) -- input or output operation

Diamond (decision) -- decision making and branching

Circle (connector) -- connector joining two parts of a program

The following are some guidelines in flowcharting:
a. In drawing a proper flowchart, all necessary requirements should be listed out in logical order.
b. The flowchart should be clear, neat and easy to follow. There should not be any room for ambiguity
in understanding the flowchart.
c. The usual direction of the flow of a procedure or system is from left to right or top to bottom.
d. Only one flow line should come out from a process symbol.
e. Only one flow line should enter a decision symbol, but two or three flow lines, one for each possible
answer, should leave the decision symbol.

f. Only one flow line is used in conjunction with a terminal symbol.
g. If the flowchart becomes complex, it is better to use connector symbols to reduce the number of flow
lines. Avoid intersecting flow lines to keep the flowchart an effective and clear means of
communication.
h. Ensure that the flowchart has a logical start and finish.
i. It is useful to test the validity of the flowchart by passing simple test data through it.
PART II: Example of a flowchart:
Problem 1: Write an algorithm and draw the flowchart for finding the average of two numbers.
Algorithm:
Input: two numbers x and y
Output: the average of x and y
Steps:
1. input x
2. input y
3. sum = x + y
4. average = sum / 2
5. output average
The corresponding flowchart passes through the boxes START, Input x, Input y, Sum = x + y,
Average = sum/2, Output average, and END, connected in that order by flow lines.
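The same algorithm, written out as a short Python function so you can see how the steps translate directly into code:

```python
# Steps 3 and 4 of the algorithm: sum = x + y, average = sum / 2.
def average_of_two(x: float, y: float) -> float:
    total = x + y       # sum = x + y
    return total / 2    # average = sum / 2

# Steps 1, 2 and 5 (input x, input y, output average):
print(average_of_two(4, 10))  # 7.0
```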

Programming concept.

When I first learned to program, I thought that programming was equal to coding! I only needed to sit in front of
the computer and start typing program code. Is that what you are thinking? Actually, in a programming
project we spend most of our time designing the program instead of coding it. You may wonder why? The
answer is simple: NO ONE can just sit in front of the computer and write a program of any
complexity!! I will talk about "Software Engineering" later, when you get started. In this section, I
will just give you some basic ideas about programming methodology.
Top-Down programming design method
During the 1970s and 80s, the primary software engineering methodology was called the structured
programming approach. When software engineers design a program, they try to break the problem down into
smaller pieces and work on each smaller piece separately, continuing this process until every piece of the
problem is easy to work with without further decomposition. We call this design method the 'Top-Down'
programming method. This approach has its value and was a popular approach at the time. However, it has
its own weakness. For example, when a piece of code is developed in a project, that piece of source
code depends on the upper levels of code and is unique to each project because of the top-down approach, so
you repeatedly have to redevelop code you have already developed. In other words, the cost of producing a
high-quality program becomes very high.
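As a tiny illustration of top-down decomposition (a made-up payroll example, not from the text): the overall problem is split into sub-problems, each simple enough to write directly.

```python
# Sub-problem 1: compute gross pay from hours worked and hourly rate.
def gross_pay(hours: float, rate: float) -> float:
    return hours * rate

# Sub-problem 2: compute tax on the gross pay (illustrative flat rate).
def tax(gross: float, tax_rate: float = 0.2) -> float:
    return gross * tax_rate

# Top level: the original problem, expressed in terms of the sub-problems.
def net_pay(hours: float, rate: float) -> float:
    g = gross_pay(hours, rate)
    return g - tax(g)

print(net_pay(40, 10.0))  # 320.0
```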
Bottom-up programming design method
The bottom-up design approach, as you can tell from the name, starts the design process at the bottom. It is
possible when you already have some idea of the problem, or you already have source code which can fit into
the project. The bottom-up approach is usually used together with the top-down approach to solve problems.
Object-oriented programming (OOP)
You may have heard a lot about object-oriented programming in the past several years. Everybody is talking
about OOP. You may feel the power of OOP even if you don't know any programming. Anyway, let's talk
about OOP.

Data communication

• Data communication is the transfer of data from one device to another via some form of transmission
medium.
• A data communications system must transmit data to the correct destination in an accurate and timely
manner.
• The five components that make up a data communications system are the message, sender, receiver,
medium, and protocol.
• Text, numbers, images, audio, and video are different forms of information.
• Data flow between two devices can occur in one of three ways: simplex, half-duplex, or full-duplex.
• A network is a set of communication devices connected by media links.
• In a point-to-point connection, two and only two devices are connected by a dedicated link. In a
multipoint connection, three or more devices share a link.
• Topology refers to the physical or logical arrangement of a network. Devices may be arranged in a
mesh, star, bus, or ring topology.
• A network can be categorized as a local area network (LAN), a metropolitan-area network (MAN),
or a wide area network (WAN).
• A LAN is a data communication system within a building, plant, or campus, or between nearby
buildings.
• A MAN is a data communication system covering an area the size of a town or city.
• A WAN is a data communication system spanning states, countries, or the whole world.
• An internet is a network of networks.
• The Internet is a collection of many separate networks.
• TCP/IP is the protocol suite for the Internet.
• There are local, regional, national, and international Internet service providers (ISPs).
• A protocol is a set of rules that governs data communication; the key elements of a protocol are
syntax, semantics, and timing.
• Standards are necessary to ensure that products from different manufacturers can work together as
expected.
• The ISO, ITU-T, ANSI, IEEE, and EIA are some of the organizations involved in standards creation.
• Forums are special-interest groups that quickly evaluate and standardize new technologies.
• A Request for Comment (RFC) is an idea or concept that is a precursor to an Internet standard.

Network Topology
Designing a Network Topology
The term topology, or more specifically, network topology, refers to the arrangement or physical layout of
computers, cables, and other components on the network. "Topology" is the standard term that most network
professionals use when they refer to the network's basic design. In addition to the term "topology," you will
find several other terms that are used to define a network's design:
• Physical layout
• Design
• Diagram
• Map
A network's topology affects its capabilities. The choice of one topology over another will have an impact on
the:
• Type of equipment the network needs.
• Capabilities of the equipment.
• Growth of the network.
• Way the network is managed.
Developing a sense of how to use the different topologies is a key to understanding the capabilities of the
different types of networks.
Before computers can share resources or perform other communication tasks they must be connected. Most
networks use cable to connect one computer to another.
Standard Topologies
All network designs stem from four basic topologies:
• Bus
• Star
• Ring
• Mesh
A bus topology consists of devices connected to a common, shared cable. Connecting computers to cable
segments that branch out from a single point, or hub, is referred to as setting up a star topology. Connecting
computers to a cable that forms a loop is referred to as setting up a ring topology. A mesh topology connects
all computers in a network to each other with separate cables.
These four topologies can be combined in a variety of more complex hybrid topologies.
Bus
The bus topology is often referred to as a "linear bus" because the computers are connected in a straight line.
This is the simplest and most common method of networking computers. Figure 1.15 shows a typical bus
topology. It consists of a single cable called a trunk (also called a backbone or segment) that connects all of
the computers in the network in a single line.

Figure 1.15 Bus topology network

Run the c01dem01 video located in the Demos folder on the compact disc accompanying this book to view
a demonstration of a bus-topology connection.
Communication on the Bus
Computers on a bus topology network communicate by addressing data to a particular computer and sending
out that data on the cable as electronic signals. To understand how computers communicate on a bus, you
need to be familiar with three concepts:
• Sending the signal
• Signal bounce
• Terminator
Sending the Signal Network data in the form of electronic signals is sent to all the computers on the
network. Only the computer whose address matches the address encoded in the original signal accepts the
information. All other computers reject the data. Figure 1.16 shows a message being sent from
0020af151d8b to 02608c133456. Only one computer at a time can send messages.
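The delivery rule described above can be sketched in a few lines of Python. This is a hypothetical simulation, not real networking code: every station on the bus "sees" the frame, but only the one whose address matches accepts it. The station addresses echo those in Figure 1.16.

```python
# Sketch of bus-style delivery: every station receives the signal,
# but only the station whose address matches accepts the data.

def broadcast_on_bus(stations, dest_address, data):
    """Offer the frame to every station; return the ones that accepted it."""
    accepted_by = []
    for address in stations:
        if address == dest_address:   # address match: accept the data
            accepted_by.append(address)
        # every other station sees the signal but rejects the data
    return accepted_by

stations = ["0020af151d8b", "02608c133456", "00a0c9112233"]
print(broadcast_on_bus(stations, "02608c133456", "hello"))
# all three stations saw the frame; only '02608c133456' accepted it
```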

Run the c01dem02 video located in the Demos folder on the compact disc accompanying this book to view
a demonstration of how data is transferred in a bus topology.

Figure 1.16 Data is sent to all computers, but only the destination computer accepts it
Because only one computer at a time can send data on a bus network, the number of computers attached to
the bus will affect network performance. The more computers there are on a bus, the more computers will be
waiting to put data on the bus and, consequently, the slower the network will be.
There is no standard way to measure the impact of a given number of computers on the speed of any given
network. The effect on performance is not related solely to the number of computers. The following is a list
of factors that— in addition to the number of networked computers— will affect the performance of a
network:
• Hardware capabilities of computers on the network
• Total number of queued commands waiting to be executed
• Types of applications (client-server or file system sharing, for example) being run on the network
• Types of cable used on the network
• Distances between computers on the network
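Because only one computer can transmit at a time, a rough upper bound on each station's share of the bus is the raw capacity divided by the number of active stations. The toy model below is an illustration only (not a standard formula); real performance also depends on all the factors listed above.

```python
# Toy model: on a shared bus, a station that always has data queued can
# claim at best the raw capacity divided by the number of active stations.

def average_share_mbps(capacity_mbps, active_stations):
    if active_stations < 1:
        raise ValueError("need at least one station")
    return capacity_mbps / active_stations

print(average_share_mbps(10.0, 5))   # 2.0 Mbps each on a 10-Mbps bus
```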
Computers on a bus either transmit data to other computers on the network or listen for data from other
computers on the network. They are not responsible for moving data from one computer to the next.
Consequently, if one computer fails, it does not affect the rest of the network.

Star
In the star topology, cable segments from each computer are connected to a centralized component called a
hub. Figure 1.21 shows four computers and a hub connected in a star topology. Signals are transmitted from
the sending computer through the hub to all computers on the network. This topology originated in the early
days of computing when computers were connected to a centralized mainframe computer.
Figure 1.21 Simple star network

The star network offers the advantage of centralized resources and management. However, because each
computer is connected to a central point, this topology requires a great deal of cable in a large network
installation. Also, if the central point fails, the entire network goes down.
If one computer— or the cable that connects it to the hub— fails on a star network, only the failed computer
will not be able to send or receive network data. The rest of the network continues to function normally.
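The hub behavior just described can be sketched as follows. This is a simplified model of a non-switching hub: a signal arriving on one port is repeated out of every other port, and a failed spoke silences only the one computer on that cable.

```python
# Sketch of a simple (non-switching) hub in a star topology.

def hub_repeat(ports, incoming_port, frame, failed_ports=()):
    """Return the ports that receive a copy of the frame."""
    return [p for p in ports
            if p != incoming_port and p not in failed_ports]

ports = ["A", "B", "C", "D"]
print(hub_repeat(ports, "A", "data"))                        # ['B', 'C', 'D']
# a failed cable cuts off only that one computer:
print(hub_repeat(ports, "A", "data", failed_ports=("C",)))   # ['B', 'D']
```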

Run the c01dem11 video located in the Demos folder on the CD accompanying this book to view a
demonstration of what happens when a computer on a star topology network goes down.
Ring
The ring topology connects computers on a single circle of cable. Unlike the bus topology, there are no
terminated ends. The signals travel around the loop in one direction and pass through each computer, which
can act as a repeater to boost the signal and send it on to the next computer. Figure 1.22 shows a typical ring
topology with one server and four workstations. The failure of one computer can have an impact on the
entire network.
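The one-way, station-to-station travel of a ring signal can be sketched as below. Each station regenerates the signal and passes it to its neighbor; a dead station can block the path, which is why one failure can affect the whole ring. (The station names are illustrative.)

```python
# Sketch of one-way signal travel on a ring of stations.

def hops_around_ring(ring, source, dest, down=()):
    """Count hops from source to dest; return None if a down station blocks the path."""
    i = ring.index(source)
    hops = 0
    while ring[i] != dest:
        i = (i + 1) % len(ring)    # the signal moves in one direction only
        hops += 1
        if ring[i] in down and ring[i] != dest:
            return None            # a failed station breaks the ring
    return hops

ring = ["server", "ws1", "ws2", "ws3", "ws4"]
print(hops_around_ring(ring, "server", "ws3"))                  # 3
print(hops_around_ring(ring, "server", "ws3", down=("ws2",)))   # None
```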
NOTE

A network's physical topology is the wire itself. A network's logical topology is the way it carries signals on
the wire.
Figure 1.22 Simple ring network showing logical ring

Run the c01dem12 and c01dem13 videos located in the Demos folder on the CD accompanying this book to
view demonstrations of logical and actual flows of data on a ring-topology network.

Variations on the Standard Topologies
Many working topologies are hybrid combinations of the bus, star, ring, and mesh topologies.
Star Bus
The star bus is a combination of the bus and star topologies. In a star-bus topology, several star topology
networks are linked together with linear bus trunks. Figure 1.28 shows a typical star-bus topology.
If one computer goes down, it will not affect the rest of the network. The other computers can continue to
communicate. If a hub goes down, all computers on that hub are unable to communicate. If a hub is linked to
other hubs, those connections will be broken as well.
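The star-bus failure behavior described above can be sketched as a small reachability check. The hub and PC names are hypothetical: computers hang off hubs, hubs are chained by trunk links, and a failed hub cuts off its own computers and splits any chain that runs through it.

```python
# Sketch of star-bus failure: which computers remain reachable
# from a given hub after another hub goes down?

def reachable_computers(hubs, trunk_links, failed_hub, from_hub):
    """hubs: {hub: [computers]}; trunk_links: (hub, hub) pairs."""
    up = {h for h in hubs if h != failed_hub}
    seen, stack = set(), ([from_hub] if from_hub in up else [])
    while stack:                      # walk the trunk across surviving hubs
        h = stack.pop()
        if h in seen:
            continue
        seen.add(h)
        for a, b in trunk_links:
            if a == h and b in up:
                stack.append(b)
            if b == h and a in up:
                stack.append(a)
    return sorted(c for h in seen for c in hubs[h])

hubs = {"hub1": ["pc1", "pc2"], "hub2": ["pc3"], "hub3": ["pc4", "pc5"]}
trunk = [("hub1", "hub2"), ("hub2", "hub3")]
# hub2 fails: hub1's computers lose hub2's and hub3's computers
print(reachable_computers(hubs, trunk, "hub2", "hub1"))   # ['pc1', 'pc2']
```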
Figure 1.28 Star-bus network

Run the c01dem18, c01dem19, and c01dem20 videos located in the Demos folder on the CD
accompanying this book to view demonstrations of what happens when computers and hubs in a star-bus
topology go down.
Star Ring
The star ring (sometimes called a star-wired ring) appears similar to the star bus. Both the star ring and the
star bus are centered in a hub that contains the actual ring or bus. Figure 1.29 shows a star-ring network.
Linear-bus trunks connect the hubs in a star bus, while the hubs in a star ring are connected in a star pattern
by the main hub.

Figure 1.29 Star-ring network
Peer-to-Peer
Many small offices use a peer-to-peer network as described earlier in this chapter in Lesson 2: Network
Configuration. Such a network can be configured as either a physical star or a bus topology. However,
because all computers on the network are equal (each can be both client and server), the logical topology
looks somewhat different. Figure 1.30 shows the logical topology of a peer-to-peer network.

Figure 1.30 Logical peer-to-peer topology
Selecting a Topology
There are many factors to consider when deciding which topology best suits the needs of an organization.
Table 1.2 provides some guidelines for selecting a topology.
Table 1.2 Topology Advantages and Disadvantages
Bus
• Advantages: Use of cable is economical. Media is inexpensive and easy to work with. System is simple
and reliable. Bus is easy to extend.
• Disadvantages: Network can slow down in heavy traffic. Problems are difficult to isolate. A cable break
can affect many users.
Ring
• Advantages: System provides equal access for all computers. Performance is even despite many users.
• Disadvantages: Failure of one computer can impact the rest of the network. Problems are hard to isolate.
Network reconfiguration disrupts operation.
Star
• Advantages: Modifying the system and adding new computers is easy. Centralized monitoring and
management are possible. Failure of one computer does not affect the rest of the network.
• Disadvantages: If the centralized point fails, the network fails.
Mesh
• Advantages: System provides increased redundancy and reliability as well as ease of troubleshooting.
• Disadvantages: System is expensive to install because it uses a lot of cabling.

Bus Topology
A few situations will cause a bus network's termination to fail and thereby take the network down. Possible
scenarios include the following:
• A cable on the network breaks, causing each end of the cable on either side of the break to lose its
termination. Signals will bounce, and this will take the network down.
• A cable becomes loose or is disconnected, thereby separating the computer from the network. It will
also create an end that is not terminated, which in turn will cause signals to bounce and the network
to go down.
• A terminator becomes loose, thereby creating an end that is not terminated. Signals will start to
bounce, and the network will go down.
Ring Topology
A ring network is usually very reliable, but problems can occur. Possible scenarios include the following:
• One of the cables in the ring breaks, causing the network to stop functioning temporarily. In token-
ring networks, restoring the cable will immediately restore the network.
• One of the cables in the ring becomes disconnected, causing the network to temporarily stop
functioning. In token-ring networks, restoring the cable will immediately restore the network.
What is the Open System Interconnect (OSI) model?
The Open Systems Interconnection (OSI) reference model is an architectural model for open networking
systems developed by the International Organization for Standardization (ISO); work began in the late 1970s,
and the model was published as an international standard in 1984.
The OSI reference model was intended as a basis for developing universally accepted networking protocols.
In essence, the OSI reference model is an idealized model of the logical connections that must occur in order
for network communication to take place. Most protocol suites, such as TCP/IP, DECnet, and Systems Network
Architecture (SNA), map loosely to the OSI reference model. The OSI model is not itself a protocol, but it is
useful for understanding how the various protocols within a protocol suite function and interact.
The OSI reference model has seven logical layers, as shown in the following table.
The OSI Reference Model
7 Application layer
Interfaces user applications with network functionality, controls how applications access the network, and
generates error messages. Protocols at this level include HTTP, FTP, SMTP, and NFS.

6 Presentation layer
Translates data to be transmitted by applications into a format suitable for transport over the network.
Redirector software, such as the Workstation service for Microsoft Windows NT, is located at this level.
Network shells are also defined at this layer.

5 Session layer
Defines how connections can be established, maintained, and terminated. Also performs name resolution
functions. This layer enables applications running at two workstations to coordinate their communications
into a single session.

4 Transport layer
Sequences packets so that they can be reassembled at the destination in the proper order. Generates
acknowledgments and retransmits packets. Assembles packets after they are received. If a duplicate packet
arrives, the transport layer recognizes it as a duplicate and discards it. TCP and SPX are transport-layer
protocols.

3 Network layer
Defines logical host addresses, creates packet headers, and routes packets across an internetwork using
routers and Layer 3 switches. Strips the headers from the packets at the receiving end. This layer is
responsible for the entire route of a packet, from source to destination. IP and IPX are examples of network-
layer protocols.

2 Data-link layer
Specifies how data bits are grouped into frames, and specifies frame formats. Responsible for error
correction, flow control, hardware addressing (such as MAC addresses), and how devices such as hubs,
bridges, repeaters, and Layer 2 switches operate. The Project 802 specifications divide this layer into two
sublayers, the logical link control (LLC) layer and the media access control (MAC) layer. The MAC layer
deals with network access and network control. The LLC layer, operating just above the MAC layer, is
concerned with sending and receiving user messages.

1 Physical layer
Defines network transmission media, signaling methods, bit synchronization, architecture (such as Ethernet
or Token Ring), and cabling topologies. Defines how network interface cards (NICs) interact with the media
(cabling). You can think of this layer as the hardware layer.
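The transport layer's sequencing-and-duplicate-discarding behavior described above can be illustrated with a short sketch. This is a hypothetical simulation, not an implementation of TCP: packets arrive out of order or duplicated, and the receiver reassembles them by sequence number.

```python
# Sketch of transport-layer reassembly: order by sequence number,
# recognize duplicates, and discard them.

def reassemble(packets):
    """packets: (sequence_number, payload) pairs in arrival order."""
    seen = {}
    for seq, payload in packets:
        if seq in seen:
            continue               # duplicate packet: recognize and discard
        seen[seq] = payload
    return "".join(seen[seq] for seq in sorted(seen))

arrivals = [(2, "lo "), (1, "hel"), (3, "world"), (2, "lo ")]  # out of order, one duplicate
print(reassemble(arrivals))   # 'hello world'
```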

A local area network (LAN) is a computer network covering a small geographic area, such as a home, office,
or group of buildings.
A wide area network (WAN) is a computer network that covers a broad geographic area.
Asynchronous Transfer Mode (ATM) is a cell-relay, packet-switching network and data-link-layer protocol
that encodes data traffic into small, fixed-size cells of 53 bytes (48 bytes of data and 5 bytes of header
information). ATM provides data link layer services that run over Layer 1 links.
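The fixed cell size leads to simple arithmetic that is worth making concrete: a message is cut into 48-byte payloads, the last cell is padded, and each cell adds 5 bytes of header on the wire. A brief sketch:

```python
# ATM cell arithmetic: 53-byte cells = 5-byte header + 48-byte payload.

CELL_PAYLOAD = 48  # bytes of data per cell
CELL_HEADER = 5    # bytes of header per cell

def atm_cells(message_bytes):
    cells = -(-message_bytes // CELL_PAYLOAD)        # ceiling division
    padding = cells * CELL_PAYLOAD - message_bytes   # unused bytes in last cell
    wire_bytes = cells * (CELL_PAYLOAD + CELL_HEADER)
    return cells, padding, wire_bytes

print(atm_cells(100))   # (3, 44, 159): 3 cells, 44 padding bytes, 159 bytes on the wire
```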
Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The
name comes from the physical concept of the ether. Ethernet defines a number of wiring and signaling
standards for the physical layer, a means of network access at the Media Access Control (MAC)/data link
layer, and a common addressing format.
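The common addressing and frame format can be made concrete with a small sketch of the widely used Ethernet II header layout: a 6-byte destination MAC, a 6-byte source MAC, and a 2-byte EtherType, followed by the payload (the trailing frame check sequence is omitted here).

```python
import struct

# Sketch of an Ethernet II frame header: dst MAC (6), src MAC (6),
# EtherType (2), then payload. The 4-byte FCS trailer is omitted.

def build_frame(dst_mac, src_mac, ethertype, payload):
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload

dst = bytes.fromhex("0020af151d8b")
src = bytes.fromhex("02608c133456")
frame = build_frame(dst, src, 0x0800, b"IP packet here")  # 0x0800 = IPv4
print(len(frame))   # 14-byte header + 14-byte payload = 28
```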
ARCNET (also CamelCased as ARCnet, an acronym from Attached Resource Computer NETwork) is a
local area network (LAN) protocol, similar in purpose to Ethernet or Token Ring. ARCNET was the first
widely available networking system for microcomputers and became popular in the 1980s for office
automation tasks. It has since gained a following in the embedded systems market, where certain features of
the protocol are especially useful.
Stations on a token ring LAN are logically organized in a ring topology with data being transmitted
sequentially from one ring station to the next with a control token circulating around the ring controlling
access. Physically, a token ring network is wired as a star, with ' hubs'and arms out to each station and the
loop going out-and-back through each.
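Token-controlled access can be sketched as follows. This is an illustrative simplification, not the Token Ring protocol itself: a single token passes from station to station, and only the current holder may transmit, which is what gives every station an orderly turn.

```python
# Sketch of token-controlled media access on a ring.

def token_schedule(stations, wants_to_send, rounds=1):
    """Return the order in which stations get to transmit."""
    transmissions = []
    for station in list(stations) * rounds:   # the token passes station to station
        if station in wants_to_send:
            transmissions.append(station)     # only the token holder may send
    return transmissions

stations = ["A", "B", "C", "D"]
print(token_schedule(stations, wants_to_send={"B", "D"}))   # ['B', 'D']
```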
Network switches are capable of inspecting data packets as they are received, determining the source and
destination device of that packet, and forwarding it appropriately. By delivering each message only to the
connected device it was intended for, a network switch conserves network bandwidth and offers generally
better performance than a hub.
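The forwarding behavior that distinguishes a switch from a hub can be sketched as a learning table. This is a simplified model (port numbers and addresses are hypothetical): the switch learns which port each source address lives on, delivers known destinations to a single port, and floods only when the destination is still unknown.

```python
# Sketch of a learning switch: map source addresses to ports,
# forward to the known port, flood when the destination is unknown.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}     # source address -> port

    def handle(self, in_port, src, dst):
        self.mac_table[src] = in_port            # learn the sender's port
        if dst in self.mac_table:
            return [self.mac_table[dst]]         # deliver only where needed
        return [p for p in self.ports if p != in_port]   # unknown: flood

switch = LearningSwitch([1, 2, 3, 4])
print(switch.handle(1, "aa", "bb"))   # unknown destination: flood -> [2, 3, 4]
print(switch.handle(2, "bb", "aa"))   # learned: deliver to port [1]
```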
A fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the
higher-level issues of medium availability, and a physical layer interface (PHY).
Fiber distributed data interface (FDDI) provides a standard for data transmission in a local area network
that can extend in range up to 200 kilometers (124 miles).