UG - B.com - Computer Applications - 123 13 - Fundamentals of Information Technology
FUNDAMENTALS
OF
INFORMATION TECHNOLOGY
Reviewer
Dr. K. Mahesh Professor,
Department of Computer Applications,
Alagappa University, Karaikudi
Authors:
Vinay Ahlawat, Lecturer, KIET, Ghaziabad
Units: (1.0-1.3, 1.5-1.9, 2.0-2.3)
Sanjay Saxena, Managing Director, Total Synergy Consulting Pvt.Ltd.
Units: (1.4, 2.4, 14.0-14.4)
B Basavaraj, Former Principal and HOD, Department of Electronics and Communication Engineering, SJR College of Science,
Arts & Commerce
Units: (3, 4, 7, 8)
Deepti Mehrotra, Professor, Amity School of Engineering and Technology, Amity University, Noida
Unit: (5)
Vivek Kesari, Asst. Professor, Galgotia’s GIMT Institute of Management & Technology, Greater Noida
Unit: (6)
VK Govindan, Professor, Computer Engineering, Deptt. of Computer Science and Engineering, NIT, Calicut
Units: (9, 12.0-12.6)
Rajneesh Agrawal, Senior Scientist, Department of Information Technology, Government of India
Units: (10, 13.0-13.3)
Rohit Khurana, CEO, ITL Education Solutions Ltd.
Units: (11, 12.7-12.13)
Deepak Gupta, Assistant Professor, G.L. Bajaj Group of Institutions, Greater Noida
Units: (14.6-14.13)
Vikas Publishing House
Units: (2.5-2.10, 13.4-13.10, 14.5)
All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and are correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.
Work Order No. AU/DDE/DE1-238/Preparation and Printing of Course Materials/2018 Dated 30.08.2018 Copies - 500
SYLLABI-BOOK MAPPING TABLE
Fundamentals of Information Technology
1.0 INTRODUCTION
In this unit, you will learn about the basics of computer, input/output devices and
computer memory. A computer is an electronic device that is used to carry out
sequences of arithmetic or logical operations automatically. An input device is a
hardware component used to enter data into the computer, while an output device
presents the results of those arithmetic and logical operations. Memory is used for
storing data and information temporarily or permanently.
1.1 OBJECTIVES
Self-Instructional
Material 1
A computer performs four functions:
1. Accepts data
2. Processes data
3. Produces output
4. Stores results
In the following units you will learn about the parts of a computer and each
of the four parts of the information processing cycle.
Modern computers are both electronic and digital. The instructions
and data are called software; whereas wires, transistors and circuits together form
the hardware.
Generally, all computers have the following hardware components:
Memory: Enables a computer to store data and programs
Mass storage devices: Allow a computer to keep large amounts of data.
Disk drives and tape drives are the common mass storage devices.
Input device: The input device is the medium through which data and
instructions enter a computer, usually a keyboard and a mouse.
Output device: A display screen, printer or some other device that lets
you see what the computer has achieved.
Central Processing Unit (CPU): The heart of the computer that actually
executes instructions and processes data.
In addition to these, many other components make it possible for the main
components to work together efficiently. For example, the bus helps every computer
to transmit data from one part of the computer to another.
Computers can mainly be classified by power and size. These are as follows:
Personal computer: A small and single-user computer based on
a microprocessor is popularly known as Personal Computer (PC). Besides
the microprocessor, a personal computer has a keyboard (helps to enter
data), a monitor (helps to display information) and also a storage device to
store data.
Workstation: It is also a single-user and a powerful computer. A work-
station is like a personal computer which consists of a higher-quality monitor
and a more powerful microprocessor.
Minicomputer: A multi-user computer that can support ten to hundreds
of users at the same time.
Mainframe: A powerful multi-user computer which can support hundreds
and thousands of users simultaneously.
Supercomputer: An extremely fast computer capable of implementing
millions of instructions every second.
Some Basic Computer Terminologies
A computer that can carry out a large number of calculations in a short period of
time can be called a 'powerful' computer. By this definition, and ignoring the
limiting factors of the other hardware, it is the CPU that makes a computer
powerful. Computers are powerful for different reasons. They operate with amazing
reliability, speed, and accuracy. Huge amounts of data and information can be
stored in computers. Computers also permit their users to communicate with other
users.
Speed
A computer can perform billions of actions in a second. The important factors that
determine the speed of a computer are:
The amount of data that the CPU can execute in a given period of time
CPU’s clock speed
Clock speed
Clock speed is the rate at which a CPU executes instructions. Every system has
an internal clock that regulates the rate at which instructions are executed and
coordinates all the various computer components. The CPU needs a
predetermined number of clock ticks to execute each instruction it receives.
If this clock runs faster, the CPU will execute more instructions per second.
Clock speeds are expressed in megahertz (MHz) or gigahertz (GHz); mega means
million, giga means billion, and hertz means times per second. For example, 200
MHz means 200 million times per second, whereas 200 GHz means 200 billion
times per second.
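This relationship between clock rate and instruction throughput can be sketched in Python. The figures used here (a 200 MHz clock, 4 ticks per instruction) are hypothetical illustrations, not values from any real CPU:

```python
def instructions_per_second(clock_hz, ticks_per_instruction):
    """Rough throughput model: a CPU that needs a fixed number of
    clock ticks per instruction completes clock_hz / ticks of them
    each second."""
    return clock_hz / ticks_per_instruction

# Hypothetical example: a 200 MHz clock, 4 ticks needed per instruction
mhz_200 = 200_000_000            # 200 million ticks per second
print(instructions_per_second(mhz_200, 4))   # 50 million instructions/second
```

Doubling the clock rate, or halving the ticks needed per instruction, doubles the throughput in this simple model.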
Reliability
‘Failures are usually due to human error, one way or another.’ We can depend on
the modern computer because the electronic components in these have a very low
failure rate. The high reliability of the components of a computer ensures that the
computer consistently gives good results.
Accuracy
Computers process large amounts of data and produce error-free results if
(and only if) the input data is correct and the instructions it receives are input
properly. If the input data is inaccurate, the resulting output will certainly be incorrect.
So, the accuracy of a computer’s output depends mainly on the accuracy of the
data input.
Storage
A computer can store huge amounts of data. With modern storage devices, the
computer can transfer data quickly from storage to memory, process the data,
and then store the data again for further use.
Diligence
Computers are highly consistent compared to human beings. They never suffer
from human traits like boredom, tiredness and lack of concentration. Therefore,
computers score over human beings in performing repetitive and voluminous tasks.
Versatility
Computers are capable of performing any task following a series of logical steps.
They are versatile machines, but their capability is limited only by human intelligence.
In today’s fast developing, technology-savvy world, it is almost impossible to find
an area where computers are not being used. Banks, railway/air reservation, hotels,
weather forecasting and many more—computers are essential in every sector.
Power of Remembering
In human memory, information that is less important is relegated to the back of the
mind and forgotten with the progression of time, whereas a piece of information
once stored (or recorded) in the computer can never be forgotten and can be
retrieved at any time. Therefore, information can be retained as long as desired.
Secondary storage (a type of detachable memory) can be used for this purpose.
No Intelligence Quotient (IQ)
Computers have no real intellect or common sense unlike the human brain.
Computers are still not complex enough to understand and analyze and act
accordingly as the brain does. Computers can only follow rules and instructions
preset by the programmer. Traditional machines manage to bear a resemblance
to human intellect only because programmers make the computer react somewhat
like a human brain by programming a set of rules and instructions for it to follow.
Unlike the human brain, the computer can hardly innovate or invent new ideas.
The computer is faster than the human brain at logical operations and
computations, but it can produce creative innovation only by using preset
algorithms (or programs) to merge existing ideas, which shows a limited range
of creativity. The brain, however, is capable of genuine imagination.
As seen in earlier sections, the size, shape, cost, and performance of computers
have changed over the years, but the basic logical structure has not changed. Any
computer system essentially consists of three important parts, namely, input device,
central processing unit (CPU) and output device. The CPU itself consists of the
main memory, the arithmetic logic unit, and the control unit.
In addition to the five basic parts mentioned above, computers also employ
secondary storage devices (also referred to as auxiliary storage or backing storage),
which are used for storing data and instructions on a long-term basis.
Figure 1.1 shows the basic anatomy of a computer system.
All computer systems perform the following five basic operations for converting
raw data into relevant information:
1. Inputting: The process of entering data and instructions into the computer
system.
2. Storing: The process of saving data and instructions so that they are available
for use as and when required.
3. Processing: Performing arithmetic or logical operations on data, to convert
them into useful information. Arithmetic operations include operations of
add, subtract, multiply, divide etc., and logical operations are operations of
comparison like equal to, less than, greater than, etc.
4. Outputting: This is the process of providing the results to the user. These
could be in the form of visual display and/or printed reports.
5. Controlling: Refers to directing the sequence and manner in which all the
above operations are performed.
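The five operations above can be sketched as a toy information-processing cycle. This is an illustrative Python sketch of the sequence of steps, not a description of how real hardware is organized:

```python
def process_cycle(raw_data):
    # 1. Inputting: accept raw data into the system
    data = list(raw_data)
    # 2. Storing: keep data available for use as and when required
    storage = {"input": data}
    # 3. Processing: an arithmetic operation (sum) and a logical one (comparison)
    total = sum(data)
    all_positive = all(x > 0 for x in data)
    storage["result"] = (total, all_positive)
    # 4. Outputting: provide the results to the user (here, the return value)
    # 5. Controlling: the function body itself directs the sequence of steps
    return storage["result"]

print(process_cycle([3, 7, 9]))   # (19, True)
```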
Let us now familiarise ourselves with the various computer units that perform
these functions:
Input Unit
Both program and data need to be in the computer system before any kind of
operation can be performed. Program refers to the set of instructions which the
computer is to carry out, and data is the information on which these instructions
are to operate. For example, if the task is to rearrange a list of telephone subscribers
in alphabetical order, the sequence of instructions that guide the computer through
this operation is the program, whilst the list of names to be sorted is the data.
The Input Unit performs the process of transferring data and instructions
from the external environment into the computer system. Instructions and data
enter the input unit depending upon the particular input device used (keyboard,
scanner, card reader etc.). Regardless of the form in which the input unit receives
data, it converts these instructions and data into computer acceptable form (Binary
Codes). It then supplies the converted data and instructions to the computer system
for further processing.
Main Memory (Primary Storage)
Data and instructions are stored in the primary storage before processing and are
transferred as and when needed to the Arithmetic Logic Unit (ALU) where the
actual processing takes place. Once the processing is completed, the final results
are again stored in the primary storage till they are released to an output device.
Also, any intermediate results generated by the ALU are temporarily transferred
back to the primary storage until needed at a later time. Thus data and instructions
may move many times back and forth between the primary storage and the ALU
before the processing is completed.
It may be worth remembering that no processing is done in the primary
storage.
Arithmetic Logic Unit (ALU)
After the input unit transfers the information into the memory unit, the information
can then be further transferred to the ALU, where comparisons or calculations
are done and results are sent back to the memory unit.
Since all data and instructions are represented in numeric form (bit patterns),
ALUs are designed to perform the four basic arithmetic operations: add, subtract,
multiply, divide, and logic/comparison operations such as equal to, less than, greater
than.
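These seven operations can be modelled with a toy dispatch table in Python. This is an illustrative sketch of the ALU's repertoire, not a hardware description:

```python
import operator

# The four arithmetic operations and three comparisons named above
ALU_OPS = {
    "add": operator.add, "sub": operator.sub,
    "mul": operator.mul, "div": operator.truediv,
    "eq": operator.eq, "lt": operator.lt, "gt": operator.gt,
}

def alu(op, a, b):
    """Apply one ALU operation to two operands."""
    return ALU_OPS[op](a, b)

print(alu("add", 6, 2))   # 8
print(alu("gt", 6, 2))    # True
```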
Output Unit
Since computers work with binary code, the results produced are also in binary
form. The basic function of the output unit therefore is to convert these results into
human readable form before providing the output through various output devices
like terminals, printers etc.
Control Unit
It is the function of the control unit to ensure that according to the stored instructions,
the right operation is done on the right data at the right time. It is the control unit
that obtains instructions from the program stored in the main memory, interprets
them, and ensures that other units of the system execute them in the desired order.
In effect, the control unit is comparable to the central nervous system in the human
body.
Central Processing Unit
The control unit and the Arithmetic Logic Unit, together with the main memory,
are known as the Central Processing Unit (CPU). It is the brain of any computer
system.
Secondary Storage
The storage capacity of the primary memory of the computer is limited. Often, it is
necessary to store large amounts of data. So, additional memory called secondary
storage or auxiliary memory is used in most computer systems.
Secondary storage is storage other than the primary storage. These are
peripheral devices connected to and controlled by the computer to enable
permanent storage of user data and programs. Typically, hardware devices like
magnetic tapes and magnetic disks fall under this category.
1.6 SUMMARY
UNIT 2 COMPUTER GENERATIONS AND CLASSIFICATION
Structure
2.0 Introduction
2.1 Objectives
2.2 Evolution of Computers
2.3 Generations of Computer
2.4 Classification of Computers
2.5 Distributed Computer Systems
2.6 Answers to Check Your Progress Questions
2.7 Summary
2.8 Key Words
2.9 Self Assessment Questions and Exercises
2.10 Further Readings
2.0 INTRODUCTION
In this unit, you will learn about the evolution and various types of computers.
The term 'computer generation' refers to a change in the technology a computer
uses, and it is used to distinguish between varying hardware technologies.
Computers can be classified on the basis of their size, processing speed and cost,
and they can also be classified as digital, analog and hybrid. You will also learn
about distributed computer systems.
2.1 OBJECTIVES
2.2 EVOLUTION OF COMPUTERS
The history of computers began almost 2000 years ago with the advent of the
abacus. An abacus is a wooden rack which holds two horizontal wires with beads
strung on them. Numbers are represented using the respective position of beads
on the rack. Simple calculations can be carried out by appropriately placing the
beads.
Calculating machines
These machines were mainly designed to execute all basic four operations: addition,
subtraction, multiplication and division (see Figure 2.3). They are large machines
and need skillful operation. They are also very expensive and are usually owned
by companies.
Napier's Bones
A clever multiplication tool, Napier's bones, is another interesting invention,
devised in 1617 by the Scottish mathematician John Napier (1550–1617).
Napier, a clergyman and philosopher, played an important role in the history of
computing. A gifted mathematician, he published his great work on logarithms
in 1614, not long before his death. This was a brilliant invention since it enabled
the simplification of a very complicated task: people could now transform
multiplication and division into simple addition and subtraction. His logarithm
tables were used by many people and soon became very popular. Napier is also
remembered for his important invention of the 'Napier's Bones', a small
instrument consisting of ten rods, described in his 1617 book Rabdologia. The
multiplication table is engraved on the rods. This simple device is capable of
quickly carrying out multiplication if one of the numbers has only one digit
(example: 6 × 6742). Napier's bones was very successful, and people in Europe
used it until the mid-1960s (see Figure 2.4).
Slide Rule
In 1620, a calculating device called ‘Slide Rule’ based on the principle of logarithms
was invented by an English mathematician, William Oughtred. This slide rule became
the first analog computer of the modern age. In 1850, a French artillery officer
named Amedee Mannheim improved upon it by adding the movable double-sided
cursor. The two inbuilt graduated scales were placed in a way that the suitable
alignment of one scale against the other made it possible to do addition and
multiplication just by inspection.
Pascal's Adding Machine
The Pascaline was built on a brass rectangular box (see Figure 2.5). A set
of jagged dials moved internal wheels in such a way that a complete rotation of a
wheel caused the wheel on the left to advance one tenth. The first prototype
consisted of only five wheels, but later units consisted of six and eight wheels.
A pin was used to rotate the dials. Unlike Schickard's machine, the wheels were
designed only to add numbers and moved only clockwise. Subtraction
was performed by applying a tedious technique which was based on the addition
of the nine’s complement. Though the machine did not get wide acceptance as it
was expensive and unreliable as well as difficult to manufacture and use, it did
attract plenty of attention. Several people built calculating machines based on the
same design during a period of thirty years after Pascal invented his machine.
Among them, the most significant was the adding machine invented by Sir Samuel
Morland (1625–1695) of England in 1666. It had a duodecimal scale based on
the English currency, and human intervention was required to enter the carry
displayed in an auxiliary dial.
It is interesting to note that many companies introduced models based on
Pascal’s design even at the beginning of the 20th century. For example, the Lightning
Portable Adder introduced in 1908 by the Lightning Adding Machine Co. of Los
Angeles, and the Addometer introduced by the Reliable Typewriter and Adding
Machine Co. of Chicago in 1920. However, none of these machines achieved
commercial success.
Leibniz’s Multiplication and Division Machine
In 1672, the famous German polymath, mathematician and philosopher Gottfried
Wilhelm von Leibniz (1646–1716), who was also the co-inventor of differential
calculus, planned to build a machine to implement the four basic arithmetic
operations. He was inspired by a step-counting device known as a pedometer.
Leibniz was a child prodigy and he had earned his second doctorate when he
was only nineteen years old. He had gone through Pascal’s design and he improvised
on the design in order to perform multiplication and division. He finalized his design
by 1674. Leibniz called his machine Stepped Reckoner. It used a special type of
gear named the Leibniz Wheel or Stepped Drum. It consisted of a cylinder with
nine bar-shaped teeth of increasing length placed parallel to the cylinder's axis.
When the drum was rotated with the help of a crank, a regular ten-tooth wheel
which was fixed over a sliding axis was rotated between the zero to nine positions.
This rotation depended on its relative position on the drum. Like the Pascal device,
there was one set of wheels for each digit which allowed the user to slide the mobile
axis. As a result, when the drum was rotated it generated a movement proportional
to its relative position in the regular wheels. This movement was then transformed by
Leibniz Wheel into multiplication or division, but this movement depended on which
direction the stepped drum was rotated.
Babbage’s Analytical Engine
Notebook/Laptop Computers
Notebook computers are battery operated personal computers. Smaller than the
size of a briefcase, these are portable computers and can be used in places like
libraries, in meetings or even while travelling. Popularly known as laptop computers,
or simply laptops, they weigh less than 2.5 kg and can be only 3 inches thick (refer
Figure 2.8). Notebook computers are usually more expensive as compared to
desktop computers though they have almost the same functions, but since they are
sleeker and portable, they have a complex design and are more difficult to
manufacture. These computers have large storage space and other peripherals,
such as serial port, PC card, modem or network interface card, CD-ROM drive
and printer. They can also be connected to a network to download data from
other computers or to the Internet. A notebook computer has a keyboard, a flat
Liquid Crystal Display (LCD) screen, and can also have a trackball and a
pointing stick.
2.7 SUMMARY
2.8 KEY WORDS
UNIT 3 NUMBER SYSTEMS AND BOOLEAN ALGEBRA
Structure
3.0 Introduction
3.1 Objectives
3.2 Number Systems
3.2.1 Decimal Number System
3.2.2 Binary Number System
3.2.3 Octal Number System
3.2.4 Hexadecimal Number System
3.2.5 Conversion from One Number System to the Other
3.3 Complements
3.4 Numeric and Character Codes
3.5 Basic Gates
3.5.1 AND Gate
3.5.2 OR Gate
3.6 Boolean Algebra
3.6.1 Laws and Rules of Boolean Algebra
3.7 Answers to Check Your Progress Questions
3.8 Summary
3.9 Key Words
3.10 Self Assessment Questions and Exercises
3.11 Further Readings
3.0 INTRODUCTION
In this unit, you will learn about number systems and binary codes. In mathematics,
a 'number system' is a set of numbers together with one or more operations, such
as addition or multiplication. The number systems are represented as natural
numbers, integers, rational numbers, algebraic numbers, real numbers, complex
numbers, etc. A number symbol is called a numeral. A numeral system or system
of numeration is a writing system for expressing numbers. For example, the standard
decimal representation of whole numbers gives every whole number a unique
representation as a finite sequence of digits. You will learn about the binary numeral
system or base-2 number system that represents numeric values using two symbols,
0 and 1. This base-2 system is specifically a positional notation with a radix of 2.
It is implemented in digital electronic circuitry using logic gates, and the binary
system is used by all modern computers. Since binary is a base-2 system, each
digit represents an increasing power of 2, with the rightmost digit representing
2⁰, the next representing 2¹, then 2², and so on. To determine the decimal
representation of a binary number, simply take the sum of the products of the
binary digits and the powers of 2 which they represent. You will also learn about
basic gates and Boolean algebra.
3.2 NUMBER SYSTEMS
A number is an idea that is used to refer to amounts of things. People use number
words, number gestures and number symbols. Number words are said out loud.
Number gestures are made with some part of the body, usually the hands. Number
symbols are marked or written down. A number symbol is called a numeral. The
number is the idea we think of when we see the numeral, or when we see or hear
the word.
On hearing the word number, we immediately think of the familiar decimal
number system with its 10 digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. These numerals
are called Arabic numerals. Our present number system provides modern
mathematicians and scientists with great advantages over those of previous
civilizations and is an important factor in our advancement. Since fingers are the
most convenient tools nature has provided, human beings use them in counting.
So, the decimal number system followed naturally from this usage.
A number system of base, or radix, r is a system that uses r distinct digit
symbols. Numbers are represented by a string of digit symbols. To determine the
quantity that the number represents, it is necessary to multiply each digit by an
integer power of r and then form the sum of all the weighted digits. It is possible to
use any whole number greater than one as a base in building a numeration system.
The number of digits used is always equal to the base.
There are four systems of arithmetic which are often used in digital systems.
These systems are as follows:
1. Decimal
2. Binary
3. Octal
4. Hexadecimal
In any number system, there is an ordered set of symbols known as digits. A
collection of these digits makes a number which in general has two parts, integer
and fractional, set apart by a radix point (.). Hence, a number system can be
represented as,

N = aₙ₋₁aₙ₋₂aₙ₋₃ ... a₁a₀ . a₋₁a₋₂a₋₃ ... a₋ₘ (in base b)

where the digits before the radix point form the integer portion, those after it
form the fractional portion, and
N = a number
b = radix or base of the number system
n = number of digits in the integer portion
m = number of digits in the fractional portion
aₙ₋₁ = Most Significant Digit (MSD)
a₋ₘ = Least Significant Digit (LSD)
and 0 ≤ aᵢ ≤ b – 1 for every digit aᵢ.
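The weighted-sum rule behind this notation can be expressed directly in Python. In this sketch the digits are supplied as lists of integer values (most significant first for the integer portion), which sidesteps the question of digit symbols:

```python
def value_in_base(integer_digits, fractional_digits, base):
    """Weighted sum: each digit a_i contributes a_i * base**i, with i
    running down from n-1 on the integer side and from -1 downward
    on the fractional side."""
    n = len(integer_digits)
    total = sum(d * base ** (n - 1 - i) for i, d in enumerate(integer_digits))
    total += sum(d * base ** -(i + 1) for i, d in enumerate(fractional_digits))
    return total

# 1011.01 in base 2 -> 8 + 2 + 1 + 0.25
print(value_in_base([1, 0, 1, 1], [0, 1], 2))   # 11.25
# 379 in base 10
print(value_in_base([3, 7, 9], [], 10))         # 379
```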
Base or Radix: The base or radix of a number is defined as the number of
different digits which can occur in each position in the number system.
The decimal number system has a base or radix of 10. Each of the ten
decimal digits 0 through 9, has a place value or weight depending on its position.
The weights are units, tens, hundreds, and so on. The same can be written as
powers of its base: 10⁰, 10¹, 10², 10³, ..., etc. Thus, the number 1993 represents
a quantity equal to 1000 + 900 + 90 + 3. Actually, this should be written as
1 × 10³ + 9 × 10² + 9 × 10¹ + 3 × 10⁰. Hence, 1993 is the sum of all digits
multiplied by their weights. Each position has a value 10 times greater than the
position to its right.
For example, the number 379 actually stands for the following representation:

Weights:  10²  10¹  10⁰
          100  10   1
Digits:   3    7    9

379₁₀ = 3 × 100 + 7 × 10 + 9 × 1 = 3 × 10² + 7 × 10¹ + 9 × 10⁰

In this example, 9 is the Least Significant Digit (LSD) and 3 is the Most
Significant Digit (MSD).
Example 3.1: Write the number 1936.469 using decimal representation.
Solution: 1936.469₁₀ = 1 × 10³ + 9 × 10² + 3 × 10¹ + 6 × 10⁰ + 4 × 10⁻¹
+ 6 × 10⁻² + 9 × 10⁻³
= 1000 + 900 + 30 + 6 + 0.4 + 0.06 + 0.009
It is seen that powers are numbered to the left of the decimal point starting
with 0 and to the right of the decimal point starting with –1.
The general rule for representing numbers in the decimal system by using
positional notation is as follows:
aₙaₙ₋₁ ... a₂a₁a₀ = aₙ10ⁿ + aₙ₋₁10ⁿ⁻¹ + ... + a₂10² + a₁10¹ + a₀10⁰
where n + 1 is the number of digits to the left of the decimal point.
3.2.2 Binary Number System

The numeral 10₂ (one, zero, base two) stands for two, the base of the system.
In binary counting, single digits are used for none and one. Two-digit numbers
are used for 10₂ and 11₂ [2 and 3 in decimal numerals]. For the next counting
number, 100₂ (4 in decimal numerals), three digits are necessary. After 111₂
(7 in decimal numerals), four-digit numerals are used until 1111₂ (15 in decimal
numerals) is reached, and so on. In a binary numeral, every position has a value
2 times the value of the position to its right.
A binary number with 4 bits, is called a nibble and a binary number with 8
bits is known as a byte.
For example, the number 1011₂ actually stands for the following representation:
1011₂ = 1 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰
= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1
1011₂ = 8 + 0 + 2 + 1 = 11₁₀
In general,
[bₙbₙ₋₁ ... b₂b₁b₀]₂ = bₙ2ⁿ + bₙ₋₁2ⁿ⁻¹ + ... + b₂2² + b₁2¹ + b₀2⁰
Similarly, the binary number 10101.011 can be written as follows:

   1    0    1    0    1   .   0    1    1
   2⁴   2³   2²   2¹   2⁰  .   2⁻¹  2⁻²  2⁻³
 (MSD)                                  (LSD)

10101.011₂ = 1 × 2⁴ + 0 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰
+ 0 × 2⁻¹ + 1 × 2⁻² + 1 × 2⁻³
= 16 + 0 + 4 + 0 + 1 + 0 + 0.25 + 0.125 = 21.375₁₀
In a binary numeral, the place value increases in powers of two starting with 0
to the left of the binary point and decreases, starting with power –1, to the right
of the binary point.
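These expansions can be checked with a short Python sketch that evaluates a binary string (with an optional binary point) digit by digit:

```python
def binary_to_decimal(bits):
    """Evaluate a binary numeral such as '10101.011' as a decimal value,
    using positional weights 2**n ... 2**1 2**0 . 2**-1 2**-2 ..."""
    if "." in bits:
        whole, frac = bits.split(".")
    else:
        whole, frac = bits, ""
    value = sum(int(b) * 2 ** (len(whole) - 1 - i) for i, b in enumerate(whole))
    value += sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(frac))
    return value

print(binary_to_decimal("1011"))       # 11
print(binary_to_decimal("10101.011"))  # 21.375
```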
Decimal   Binary    Octal
0         000 000   0
1         000 001   1
2         000 010   2
3         000 011   3
4         000 100   4
5         000 101   5
6         000 110   6
7         000 111   7
8         001 000   10
9         001 001   11
10        001 010   12
11        001 011   13
12        001 100   14
13        001 101   15
14        001 110   16
15        001 111   17
16        010 000   20
3.2.4 Hexadecimal Number System
The hexadecimal system groups numbers by sixteen and powers of sixteen.
Hexadecimal numbers are used extensively in microprocessor work. Most
minicomputers and microcomputers have their memories organized into sets of
bytes, each consisting of eight binary digits. Each byte either is used as a single
entity to represent a single alphanumeric character or broken into two 4-bit pieces.
When the bytes are handled in two 4-bit pieces, the programmer is given the
option of declaring each 4-bit character as a piece of a binary number or as two
BCD numbers.
The hexadecimal number is formed from a binary number by grouping bits
in groups of 4 bits each, starting at the binary point. This is a logical way of grouping,
since computer words come in 8 bits, 16 bits, 32 bits, and so on. In a group of 4
bits, the decimal numbers 0 to 15 can be represented as shown in Table 3.2.
The hexadecimal number system has a base of 16. Thus, it has 16 distinct
digit symbols. It uses the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 plus the letters A, B,
C, D, E and F as 16 digit symbols. The relationship among octal, hexadecimal and
binary is shown in Table 3.2. Each hexadecimal number represents a group of
four binary digits.
Table 3.2 Equivalent Numbers in Decimal, Binary, Octal and Hexadecimal Number Systems
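The 4-bit grouping described above can be sketched in Python. This sketch assumes the binary string represents a whole number (no binary point):

```python
def binary_to_hex(bits):
    """Group bits in fours starting from the right end (the binary point)
    and map each 4-bit group to one hexadecimal digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad to a multiple of 4 bits
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

print(binary_to_hex("101101111100"))  # B7C
print(binary_to_hex("11111"))         # 1F
```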
Counting in Hexadecimal
When counting in hex, each digit can be incremented from 0 to F. Once it reaches
F, the next count causes it to recycle to 0 and the next higher digit is incremented.
This is illustrated in the following counting sequences: 0038, 0039, 003A, 003B,
003C, 003D, 003E, 003F, 0040; 06B8, 06B9, 06BA, 06BB, 06BC, 06BD,
06BE, 06BF, 06C0, 06C1.
3.2.5 Conversion from One Number System to the Other
Binary to Decimal Conversion
A binary number can be converted into decimal number by multiplying the binary
1 or 0 by the weight corresponding to its position and adding all the values.
Example 3.2: Convert the binary number 110111 to decimal number.
Solution: [110111]2 = 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0
= 1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 1 × 1
= 32 + 16 + 0 + 4 + 2 + 1
= [55]10
We can streamline binary to decimal conversion by the following procedure:
Step 1: Write the binary, i.e., all its bits in a row.
Step 2: Write 1, 2, 4, 8, 16, 32, ..., directly under the binary number working
from right to left.
Step 3: Omit the decimal weight which lies under zero bits.
Step 4: Add the remaining weights to obtain the decimal equivalent.
The same method is used for binary fractional numbers.
Example 3.3: Convert the binary number 11101.1011 into its decimal
equivalent.
Solution:
Step 1: 1 1 1 0 1 . 1 0 1 1
Binary Point
Step 2: 16 8 4 2 1 . 0.5 0.25 0.125 0.0625
Step 3: 16 8 4 0 1 . 0.5 0 0.125 0.0625
Step 4: 16 + 8 + 4 + 1 + 0.5 + 0.125 + 0.0625 = [29.6875]10
Hence, [11101.1011]2 = [29.6875]10
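Steps 1 to 4 can be sketched in Python for both the integer and fractional parts; binary_to_decimal is an illustrative name, not a standard routine.

```python
# Positional-weight conversion of a binary string to decimal.
def binary_to_decimal(bits):
    if '.' in bits:
        int_part, frac_part = bits.split('.')
    else:
        int_part, frac_part = bits, ''
    value = 0.0
    # integer bits carry weights ..., 4, 2, 1 working from right to left
    for i, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2 ** i
    # fractional bits carry weights 0.5, 0.25, 0.125, ... left to right
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -i
    return value
```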
Table 3.3 lists the binary numbers from 0000 to 10000. Table 3.4 lists
powers of 2 and their decimal equivalents and the number of K. The abbreviation K stands for 2^10 = 1024. Therefore, 1K = 1024, 2K = 2048, 3K = 3072, 4K = 4096, and so on. Many personal computers have 64K memory; this means that such computers can store up to 65,536 bytes in the memory section.
Table 3.3 Binary Numbers

Decimal    Binary
0          0
1          1
2          10
3          11
4          100
5          101
6          110
7          111
8          1000
9          1001
10         1010
11         1011
12         1100
13         1101
14         1110
15         1111
16         10000

Table 3.4 Powers of 2

Power of 2    Equivalent    Abbreviation
2^0           1
2^1           2
2^2           4
2^3           8
2^4           16
2^5           32
2^6           64
2^7           128
2^8           256
2^9           512
2^10          1024          1K
2^11          2048          2K
2^12          4096          4K
2^13          8192          8K
2^14          16384         16K
2^15          32768         32K
2^16          65536         64K
Similarly, [25.375]10 = 16 + 8 + 1 + 0.25 + 0.125
= 2^4 + 2^3 + 0 + 0 + 2^0 + 0 + 2^-2 + 2^-3
[25.375]10 = [11011.011]2
This is a laborious method for converting numbers. It is convenient for small numbers and can be performed mentally, but is less used for larger numbers.
Double Dabble Method
A popular method known as double dabble method, also known as divide-by-
two method, is used to convert a large decimal number into its binary equivalent.
In this method, the decimal number is repeatedly divided by 2 and the remainder
after each division is used to indicate the coefficient of the binary number to be
formed. Notice that the binary number derived is written from the bottom up.
Example 3.5: Convert 19910 into its binary equivalent.
Solution: 199 ÷ 2 = 99 + remainder 1 (LSB)
99 ÷ 2 = 49 + remainder 1
49 ÷ 2 = 24 + remainder 1
24 ÷ 2 = 12 + remainder 0
12 ÷ 2 = 6 + remainder 0
6 ÷ 2 = 3 + remainder 0
3 ÷ 2 = 1 + remainder 1
1 ÷ 2 = 0 + remainder 1 (MSB)
The binary representation of 199 is, therefore, 11000111. Checking the
result we have,
[11000111]2 = 1 × 2^7 + 1 × 2^6 + 0 × 2^5 + 0 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0
= 128 + 64 + 0 + 0 + 0 + 4 + 2 + 1
[11000111]2 = [199]10
Notice that the first remainder is the LSB and last remainder is the MSB.
This method will not work for mixed numbers.
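The double dabble procedure for whole numbers can be sketched in Python; decimal_to_binary is an illustrative name.

```python
# Double dabble (divide-by-two): repeatedly divide by 2 and collect the
# remainders; the first remainder is the LSB and the last is the MSB.
def decimal_to_binary(n):
    if n == 0:
        return '0'
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))   # remainders accumulate LSB-first
    return ''.join(reversed(bits))    # read from the bottom up
```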
Decimal Fraction to Binary
The conversion of decimal fraction to binary fractions may be accomplished by
using several techniques. Again, the most obvious method is to subtract the highest
value of the negative power of 2, which may be subtracted from the decimal
fraction. Then, the next highest value of the negative power of 2 is subtracted from
the remainder of the first subtraction and this process is continued until there is no
remainder or to the desired precision.
Example 3.6: Convert decimal 0.875 to a binary number.
Solution: 0.875 – 1 × 2^-1 = 0.875 – 0.5 = 0.375
0.375 – 1 × 2^-2 = 0.375 – 0.25 = 0.125
0.125 – 1 × 2^-3 = 0.125 – 0.125 = 0
[0.875]10 = [0.111]2
A much simpler method of converting longer decimal fractions to binary consists of repeatedly multiplying by 2 and recording any carries into the integer position.
Example 3.7: Convert 0.694010 to a binary number.
Solution: 0.6940 × 2 = 1.3880 = 0.3880 with a carry of 1
0.3880 × 2 = 0.7760 = 0.7760 with a carry of 0
0.7760 × 2 = 1.5520 = 0.5520 with a carry of 1
0.5520 × 2 = 1.1040 = 0.1040 with a carry of 1
0.1040 × 2 = 0.2080 = 0.2080 with a carry of 0
0.2080 × 2 = 0.4160 = 0.4160 with a carry of 0
0.4160 × 2 = 0.8320 = 0.8320 with a carry of 0
0.8320 × 2 = 1.6640 = 0.6640 with a carry of 1
0.6640 × 2 = 1.3280 = 0.3280 with a carry of 1
We may stop here as the answer would be approximate.
[0.6940]10 = [0.101100011]2
If more accuracy is needed, continue multiplying by 2 until you have as
many digits as necessary for your application.
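The multiply-by-2 procedure can be sketched in Python; the function name and the default bit count are illustrative, and the loop stops early when the fraction terminates.

```python
# Repeated multiplication by 2: each integer carry becomes the next
# fraction bit. Stops at max_bits since many fractions never terminate.
def fraction_to_binary(frac, max_bits=9):
    bits = []
    while frac > 0 and len(bits) < max_bits:
        frac *= 2
        carry = int(frac)       # 1 if the product reached 1.0, else 0
        bits.append(str(carry))
        frac -= carry           # keep only the fractional part
    return '0.' + ''.join(bits)
```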
Example 3.8: Convert 14.62510 to binary number.
Solution: First the integer part 14 is converted into binary and then, the fractional
part 0.625 is converted into binary as shown below:
Integer part Fractional part
14 ÷ 2 = 7 + 0          0.625 × 2 = 1.250 with a carry of 1
7 ÷ 2 = 3 + 1           0.250 × 2 = 0.500 with a carry of 0
3 ÷ 2 = 1 + 1           0.500 × 2 = 1.000 with a carry of 1
1 ÷ 2 = 0 + 1
The binary equivalent is [1110.101]2
Octal to Decimal Conversion
An octal number can be easily converted to its decimal equivalent by multiplying
each octal digit by its positional weight.
Example 3.9: Convert (376)8 to decimal number.
Solution: The process is similar to binary to decimal conversion except that the
base here is 8.
[376]8 = 3 × 8^2 + 7 × 8^1 + 6 × 8^0
= 3 × 64 + 7 × 8 + 6 × 1 = 192 + 56 + 6 = [254]10
The fractional part can be converted into decimal by multiplying it by the
negative powers of 8.
Example 3.10: Convert (0.4051)8 to decimal number.
Solution: [0.4051]8 = 4 × 8^-1 + 0 × 8^-2 + 5 × 8^-3 + 1 × 8^-4
= 4 × 1/8 + 0 × 1/64 + 5 × 1/512 + 1 × 1/4096
[0.4051]8 = [0.5100098]10
Example 3.11: Convert (6327.45)8 to its decimal number.
Solution: [6327.45]8 = 6 × 8^3 + 3 × 8^2 + 2 × 8^1 + 7 × 8^0 + 4 × 8^-1 + 5 × 8^-2
= 3072 + 192 + 16 + 7 + 0.5 + 0.078125
[6327.45]8 = [3287.578125]10
Decimal to Octal Conversion
The methods used for converting a decimal number to its octal equivalent are the
same as those used to convert from decimal to binary. To convert a decimal number
to octal, we progressively divide the decimal number by 8, writing down the
remainders after each division. This process is continued until zero is obtained as
the quotient, the first remainder being the LSD.
The fractional part is multiplied by 8 to get a carry and a fraction. The new
fraction obtained is again multiplied by 8 to get a new carry and a new fraction.
This process is continued until the result has a sufficient number of digits for the required accuracy.
Example 3.12: Convert [416.12]10 to octal number.
Solution: Integer part 416 ÷ 8 = 52 + remainder 0 (LSD)
52 ÷ 8 = 6 + remainder 4
6 ÷ 8 = 0 + remainder 6 (MSD)
Fractional part 0.12 × 8 = 0.96 = 0.96 with a carry of 0
0.96 × 8 = 7.68 = 0.68 with a carry of 7
0.68 × 8 = 5.44 = 0.44 with a carry of 5
0.44 × 8 = 3.52 = 0.52 with a carry of 3
0.52 × 8 = 4.16 = 0.16 with a carry of 4
0.16 × 8 = 1.28 = 0.28 with a carry of 1
0.28 × 8 = 2.24 = 0.24 with a carry of 2
0.24 × 8 = 1.92 = 0.92 with a carry of 1
[416.12]10 = [640.07534121]8
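Both halves of Example 3.12 can be sketched in one Python function; the name and the default number of fraction digits are illustrative choices.

```python
# Decimal to octal: divide the integer part by 8 collecting remainders,
# and multiply the fractional part by 8 collecting carries.
def decimal_to_octal(n, frac, frac_digits=8):
    digits = []
    while n > 0:
        n, r = divmod(n, 8)
        digits.append(str(r))              # LSD first
    result = ''.join(reversed(digits)) or '0'
    if frac > 0:
        result += '.'
        for _ in range(frac_digits):
            frac *= 8
            carry = int(frac)              # the octal digit produced
            result += str(carry)
            frac -= carry
    return result
```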
Example 3.13: Convert [3964.63]10 to octal number.
Solution: Integer part 3964 ÷ 8 = 495 with a remainder of 4 (LSD)
A typical microcomputer can store up to 65,536 bytes. The decimal addresses of these bytes are from 0 to 65,535. The equivalent binary addresses are from
0000 0000 0000 0000 to 1111 1111 1111 1111
The first 8 bits are called the upper byte and the second 8 bits the lower byte.
When the decimal is greater than 255, we have to use both the upper byte
and the lower byte.
Hexadecimal to Octal Conversion
This can be accomplished by first writing down the 4-bit binary equivalent of each hexadecimal digit and then partitioning the bits into groups of 3 bits each. Finally, the octal equivalent of each 3-bit group is written down.
Example 3.30: Convert [2AB.9]16 to octal number.
Solution: Hexadecimal number 2 A B . 9
4 bit numbers 0010 1010 1011 . 1001
3 bit pattern 001 010 101 011 . 100 100
Octal number 1 2 5 3 . 4 4
[2AB.9]16 = [1253.44]8
Example 3.31: Convert [3FC.82]16 to octal number.
Solution: Hexadecimal number 3 F C . 8 2
4 bit binary numbers 0011 1111 1100 . 1000 0010
3 bit pattern 001 111 111 100 . 100 000 100
Octal number 1 7 7 4 . 4 0 4
[3FC.82]16 = [1774.404]8
Notice that zeros are appended on the right of the fractional bits in the above two examples to complete the groups of 3 bits.
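The expand-then-regroup procedure can be sketched in Python; hex_to_octal and its two string arguments are illustrative, and padding is applied on the left of the integer bits and on the right of the fraction bits as the examples above show.

```python
# Hexadecimal to octal via binary: expand each hex digit to 4 bits,
# then regroup the bit string into 3-bit octal digits.
def hex_to_octal(hex_int, hex_frac=''):
    int_bits = ''.join(format(int(d, 16), '04b') for d in hex_int)
    frac_bits = ''.join(format(int(d, 16), '04b') for d in hex_frac)
    int_bits = int_bits.zfill((len(int_bits) + 2) // 3 * 3)           # pad left
    frac_bits = frac_bits.ljust((len(frac_bits) + 2) // 3 * 3, '0')   # pad right
    octal = ''.join(str(int(int_bits[i:i + 3], 2))
                    for i in range(0, len(int_bits), 3))
    if frac_bits:
        octal += '.' + ''.join(str(int(frac_bits[i:i + 3], 2))
                               for i in range(0, len(frac_bits), 3))
    return octal
```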
Octal to Hexadecimal Conversion
It is the reverse of the above procedure. First the 3-bit binary equivalent of each octal digit is written down and the bits are partitioned into groups of 4 bits; then the hexadecimal equivalent of each group is written down.
Example 3.32: Convert [16.2]8 to hexadecimal number.
Solution: Octal number 1 6 . 2
3 bit binary 001 110 . 010
4 bit pattern 1110 . 0100
Hexadecimal E . 4
[16.2]8 = [E.4]16
Example 3.33: Convert [764.352]8 to hexadecimal number.
Solution: Octal number 7 6 4 . 3 5 2
3 bit binary 111 110 100 . 011 101 010
4 bit pattern 0001 1111 0100 . 0111 0101 0000
Hexadecimal number 1 F 4 . 7 5 0
[764.352]8 = [1F4.75]16
Binary Fractions
A binary fraction can be represented by a series of 1 and 0 to the right of a binary
point. The weights of digit positions to the right of the binary point are given by 2^-1, 2^-2, 2^-3 and so on.
For example, the binary fraction 0.1011 can be written as,
0.1011 = 1 × 2^-1 + 0 × 2^-2 + 1 × 2^-3 + 1 × 2^-4
= 1 × 0.5 + 0 × 0.25 + 1 × 0.125 + 1 × 0.0625
(0.1011)2 = (0.6875)10
Mixed Numbers
Mixed numbers contain both integer and fractional parts. The weights of mixed
numbers are,
2^3 2^2 2^1 2^0 . 2^-1 2^-2 2^-3, etc.
For example, a mixed binary number 1011.101 can be written as,
(1011.101)2 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 + 1 × 2^-1 + 0 × 2^-2 + 1 × 2^-3
= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 + 1 × 0.5 + 0 × 0.25 + 1 × 0.125
[1011.101]2 = [11.625]10
When different number systems are used, it is customary to enclose the number within brackets, with a subscript indicating the base of the number system.
3.3 COMPLEMENTS

For a number system of radix r, two types of complements are used: the (r – 1)'s complement and the r's complement.

Number system     (r – 1)'s complement     r's complement
Binary            1's complement           2's complement
Octal             7's complement           8's complement
Decimal           9's complement           10's complement
Hexadecimal       15's complement          16's complement
Example 3.34
1. The 1's complement of the binary number (101101)2 is obtained by changing 1's to 0's and 0's to 1's:
   1's complement of 101101 = 010010
2. The 2's complement of the binary number (10100)2 is obtained by adding 1 to its 1's complement:
   1's complement:  01011
   Add 1:          +    1
   2's complement:  01100
3. The 1's complement of (10010110)2 is (01101001)2.
4. The 2's complement of (10010110)2 is (01101010)2.
5. The 2's complement of the binary number (11001.11)2 is
   1's complement:     00110.00
   Add 1 at the LSB:  +    0.01
   2's complement:     00110.01
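For whole binary strings (ignoring a binary point), the two complements can be sketched in Python; the function names are illustrative.

```python
# 1's complement flips every bit; 2's complement adds 1 at the LSB,
# keeping the result to the same bit width.
def ones_complement(bits):
    return ''.join('1' if b == '0' else '0' for b in bits)

def twos_complement(bits):
    n = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (2 ** n)
    return format(value, '0{}b'.format(n))
```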
Representing numbers within the computer circuits, registers and the memory unit by means of electrical signals or magnetism is called numeric coding. In the computer system, the numbers are stored in binary form, since any number can be represented by the use of 1's and 0's only. Numeric codes are divided into two categories, i.e., weighted codes and non-weighted codes. The different types of weighted codes are:
(i) BCD Code,
(ii) 2-4-2-1 Code,
(iii) 4-2-2-1 Code,
(iv) 5-2-1-1 Code,
(v) 7-4-2-1 Code, and
(vi) 8-4-2-1 Code.
The non-weighted codes are of two types, i.e.,
(i) Non-Error Detecting Codes, and
(ii) Error Detecting Codes.
Character codes
Alphanumeric codes are also called character codes. These are binary codes
which are used to represent alphanumeric data. The codes write alphanumeric
data including letters of the alphabet, numbers, mathematical symbols and
punctuation marks in a form that is understandable and processable by a computer.
All these codes are discussed in detail in unit 6.
Table 3.6 2-Input AND Gate

Inputs      Output
A   B       Y
0   0       0
0   1       0
1   0       0
1   1       1

Table 3.7 3-Input AND Gate

Inputs        Output
A   B   C     Y
0   0   0     0
0   0   1     0
0   1   0     0
0   1   1     0
1   0   0     0
1   0   1     0
1   1   0     0
1   1   1     1
3.5.2 OR Gate
The OR gate is a digital logic gate that implements logical disjunction. A basic circuit has two or more inputs and a single output, and it operates in accordance with the following definition: the output of an OR gate assumes state 1 if one or more (or all) inputs assume state 1.
From the truth table it can be seen that all switches must be open (0 state) for the light to be off (output 0 state). This type of circuit is called an OR gate. Table 3.8 shows the truth table of a three-input OR gate.
Table 3.8 Truth Table of Three-Input OR Gates
Inputs Outputs
A B C Y=A+B+C
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 1
Table 3.9 is the truth table for a two-input OR gate. The OR gate is an ANY or ALL gate; an output occurs when any or all of the inputs are high. Table 3.10 shows the binary equivalents, in which A and B are the inputs and Y = A + B is the output.
Table 3.9 Two-Input OR Gate Table 3.10 Binary Equivalent
Basic Operations: Boolean algebra is specifically based on logical counterparts to the numeric operations of multiplication xy, addition x + y, and negation –x, namely conjunction x ∧ y (AND), disjunction x ∨ y (OR) and complement or negation ¬x (NOT). In digital electronics, the AND is represented as a multiplication, the OR is represented as an addition and the NOT is denoted with a postfix prime, for example A′, which means NOT A. Conjunction is the closest of these three operations to its numerical counterpart. As a logical operation, the conjunction of two propositions is true when both propositions are true and false otherwise. Disjunction works almost like addition with one exception, i.e., the disjunction of 1 and 1 is neither 2 nor 0 but 1. Hence, the disjunction of two propositions is false when both propositions are false and true otherwise. The disjunction is also termed the dual of conjunction. Logical negation, however, does not work like numerical negation. It corresponds to incrementation: ¬x = x + 1 mod 2. An operation with this property is termed an involution. Using negation we can formalize the notion that conjunction is dual to disjunction as per De Morgan's laws, ¬(x ∧ y) = ¬x ∨ ¬y and ¬(x ∨ y) = ¬x ∧ ¬y. These can also be construed as definitions of conjunction in terms of disjunction and vice versa: x ∧ y = ¬(¬x ∨ ¬y) and x ∨ y = ¬(¬x ∧ ¬y).
Derived operations: Other Boolean operations can be derived from these by composition. For example, implication x→y is a binary operation, which is false when x is true and y is false, and true otherwise. It can also be expressed as x→y = ¬x ∨ y or equivalently ¬(x ∧ ¬y). In Boolean logic this operation is termed material implication, which distinguishes it from related but non-Boolean logical concepts. The basic concept is that an implication x→y is by default true.
Boolean algebra, however, does have an exact counterpart of addition modulo 2, called eXclusive-OR (XOR) or parity, represented as x ⊕ y. The XOR of two propositions is true only when exactly one of the propositions is true. Further, the XOR of any value with itself vanishes, for example x ⊕ x = 0. Its digital electronics symbol is a hybrid of the disjunction symbol and the equality symbol. XOR is the only binary Boolean operation that is commutative and whose truth table has equally many 0s and 1s.
Another example is x|y, the NAND gate in digital electronics, which is false when both arguments are true and true otherwise. NAND can be defined by composition of negation with conjunction because x|y = ¬(x ∧ y). It does not have its own schematic symbol and is represented using an AND gate with an inverted output. Unlike conjunction and disjunction, NAND is a binary operation that can be used to obtain negation, using the notation ¬x = x|x. Using negation one can define conjunction in terms of NAND through x ∧ y = ¬(x|y), from which all other Boolean operations of nonzero parity can be obtained. NOR, ¬(x ∨ y), is the evident dual of NAND and is equally used for this purpose. This universal character of NAND and NOR has been widely used for gate arrays and also for integrated circuits with multiple general-purpose gates.
In logical circuits, a simple adder can be made using an XOR gate to add the numbers and a series of AND, OR and NOT gates to create the carry output. XOR is also used for detecting an overflow in the result of a signed binary arithmetic operation, which occurs when the leftmost retained bit of the result is not the same as the infinite number of digits to the left.
3.6.1 Laws and Rules of Boolean Algebra
Boolean algebra is a system of mathematical logic. Many properties of ordinary algebra are also valid for Boolean algebra. In Boolean algebra, every number is either 0 or 1. There are no negative or fractional numbers. Though many of these laws have already been discussed, they provide the tools necessary for simplifying Boolean expressions.
The following are the basic laws of Boolean algebra:
Laws of Complementation
The term complement means to invert, to change 1s to 0s and 0s to 1s. The
following are the laws of complementation:
Law 1: 0' = 1
Law 2: 1' = 0
Law 3: (A')' = A

OR Laws                        AND Laws
Law 4: 0 + 0 = 0               Law 12: 0.0 = 0
Law 5: 0 + 1 = 1               Law 13: 1.0 = 0
Law 6: 1 + 0 = 1               Law 14: 0.1 = 0
Law 7: 1 + 1 = 1               Law 15: 1.1 = 1
Law 8: A + 0 = A               Law 16: A.0 = 0
Law 9: A + 1 = 1               Law 17: A.1 = A
Law 10: A + A = A              Law 18: A.A = A
Law 11: A + A' = 1             Law 19: A.A' = 0
Laws of ordinary algebra that are also valid for Boolean algebra are:
Commutative Laws
Law 20 A+B=B+A
Law 21 A.B= B.A
Associative Laws
Law 22: A + (B + C) = (A + B) + C
Law 23: A.(B.C) = (A.B).C
Distributive Laws
Law 24: A.(B + C) = A.B + A.C
Law 25: A + B.C = (A + B).(A + C)
Law 26: A + (A'.B) = A + B
Example 3.35: Prove A + BC = (A + B) (A + C).
Solution: A + BC = A.1 + BC              [Law A.1 = A]
= A(1 + B) + BC                          [Law 1 + B = 1]
= A.1 + AB + BC                          [Law A(B + C) = AB + AC]
= A(1 + C) + AB + BC                     [Law 1 + C = 1]
= A.1 + AC + AB + BC
= A.A + AC + AB + BC                     [Law A.A = A]
= A(A + C) + B(A + C)
A + BC = (A + C)(A + B)
Alternative proof:
(A + C)(A + B) = A.A + A.B + A.C + B.C
= A + AB + AC + BC
= A(1 + B) + AC + BC
= A.1 + AC + BC
= A(1 + C) + BC
= A + BC
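A Boolean identity over finitely many variables can also be checked mechanically by evaluating both sides for every input combination; this sketch (the helper name is illustrative) does so for Law 25.

```python
# Exhaustive check of Law 25, A + BC = (A + B)(A + C), over all eight
# combinations of the Boolean variables A, B and C.
from itertools import product

def law25_holds():
    for a, b, c in product([0, 1], repeat=3):
        lhs = a | (b & c)            # A + BC
        rhs = (a | b) & (a | c)      # (A + B)(A + C)
        if lhs != rhs:
            return False
    return True
```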
3. A logic gate is an electronic circuit which makes logical decisions. The most common logic gates used are OR, AND, NOT, NAND and NOR gates.
3.9 SUMMARY
Logical Circuits
4.0 INTRODUCTION
Logic circuits whose outputs at any instance of time are entirely dependent on the
input signals present at that time are known as combinational circuits. A
combinational circuit has no memory characteristic as its output does not depend
upon any past inputs. A combinational logic circuit consists of input variables, logic
gates and output variables. The design of a combinational circuit starts from the
verbal outline of the problem and ends in a logic circuit diagram or a set of Boolean
functions from which the logic diagram can be easily obtained.
You will also learn about the registers and counters. A register is a group of
flip-flops suitable for storing binary information. Each flip-flop is a binary cell capable
of storing one bit of information. An n-bit register has a group of n flip-flops and is
capable of storing any binary information containing n bits. The register is mainly
used for storing and shifting binary data entered into it from an external source. A
counter, by function, is a sequential circuit consisting of a set of flip-flops connected
in a special manner to count the sequence of the input pulses received in digital
form. Counters are fundamental components of digital systems. Digital counters find wide application in pulse counting, frequency division, time measurement, and control and timing operations.
4.1 OBJECTIVES
The outputs of combinational logic circuits are only determined by their current
input state as they have no feedback, and any changes to the signals being applied
to their inputs will immediately have an effect at the output. In other words, in a combinational logic circuit, as the input condition changes state, so too does the output, because combinational circuits have no memory. Combinational logic circuits are made up
from basic logic AND, OR or NOT gates that are combined or connected together
to produce more complicated switching circuits. As combination logic circuits are
made up from individual logic gates they can also be considered as decision making
circuits and combinational logic is about combining logic gates together to process
two or more signals in order to produce at least one output signal according to the
logical function of each logic gate. Common combinational circuits made up from
individual logic gates include multiplexers, decoders and demultiplexers, full and
half adders etc. One of the most common uses of combination logic is in multiplexer
and demultiplexer type circuits. Here, multiple inputs or outputs are connected to
a common signal line and logic gates are used to decode an address to select a
single data input or output switch. A multiplexer consists of two separate
components, a logic decoder and some solid state switches. Figure 4.1 shows the
hierarchy of combinational logic circuits.
SUM = A ⊕ B = A'B + AB'
CARRY = AB
Inputs Outputs
Addend Augend Sum Carry
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
First entry : Inputs : A = 0 and B = 0
Human reponse : 0 plus 0 is 0 with a carry of 0.
Half-adder response : SUM = 0 and CARRY = 0
Second entry: Inputs : A = 1 and B = 0
Human response : 1 plus 0 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
C (carry) = AB = (A + B)(A + B')(A' + B)
The implementation of the half-adder circuit using basic gates is shown in Figure
4.3.
S = A'B + AB'
C = AB
4.3.1 Full-Adder
A half-adder has only two inputs and there is no provision to add a carry coming
from the lower order bits when multi-bit addition is performed. For this purpose,
a third input terminal is added and this circuit is used to add A, B and Cin.
A full-adder is a combinational circuit that performs the arithmetic sum of
three input bits and produces a SUM and a CARRY.
It consists of three inputs and two outputs. Two input variables, denoted by A and B, represent the two significant bits to be added; the third input, Cin, represents the carry from the previous lower significant position. Two
outputs are necessary because the arithmetic sum of three binary digits ranges
from 0 to 3, and binary 2 or 3 needs two digits. The outputs are designated by the
symbol S (for SUM) and Cout (for CARRY). The binary variable S gives the value
of the LSB (least significant bit) of the SUM. The binary variable Cout gives the
output CARRY.
[Figure: (a) logic symbol of a full-adder; (b) a full-adder built from two half-adders, with inputs A, B and Cin and outputs Sum and Cout]
SUM = (A ⊕ B) ⊕ Cin
= (A'B + AB')Cin' + Cin(A'B + AB')'
= (A'B + AB')Cin' + Cin(A'B' + AB)
SUM = A ⊕ B ⊕ Cin = A'B'Cin + A'BCin' + AB'Cin' + ABCin
For A = 1, B = 0 and Cin = 1,
SUM = 0 . 1 . 1 + 0 . 0 . 0 + 1 . 1 . 0 + 1 . 0 . 1 = 0
The sum of products for the carry is
Cout = A'BCin + AB'Cin + ABCin' + ABCin
= A'BCin + ABCin + AB'Cin + ABCin + ABCin' + ABCin
= BCin(A' + A) + ACin(B' + B) + AB(Cin' + Cin)
Cout = BCin + ACin + AB = AB + BCin + CinA
For A = 1, B = 0 and Cin = 1, Cout = 1.0 + 0.1 + 1.1 = 1
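The cascaded half-adder construction described above can be sketched in Python; the function names are illustrative.

```python
# Half-adder and full-adder from the gate equations: SUM uses XOR,
# CARRY uses AND, and the full-adder ORs the two half-adder carries.
def half_adder(a, b):
    return a ^ b, a & b                    # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)              # first half-adder: A, B
    s2, c2 = half_adder(s1, cin)           # second half-adder: partial sum, Cin
    return s2, c1 | c2                     # Cout = AB + (A xor B)Cin
```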
4.3.2 Half-Subtractor
A combinational circuit which is used to perform subtraction of two binary bits is
known as a half-subtractor.
The logic symbol of a half-subtractor is shown in Figure 4.5(a). It has two
inputs, A (minuend) and B (subtrahend) and two outputs D (difference) and C
(borrow out). It is made up of an XOR gate, a NOT gate and an AND gate.
[Figure 4.5(b)]. Subtraction of two binary numbers may be accomplished by taking
the complement of the subtrahend and adding it to the minuend; that is, the
subtraction becomes an addition operation. The truth table for half-subtraction is
given in Table 4.3. From the truth table, it is clear that the difference output is 0 if A = B and 1 if A ≠ B; the borrow output C is 1 whenever A < B. If A is less than B, then subtraction is done by borrowing from the next higher order bit.
The Boolean expressions for difference (D) and borrow (C) are given by,
D = A'B + AB' = A ⊕ B
C = A'B
Inputs Outputs
Minuend Subtrahend Difference Borrow
A B D C
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
4.3.3 Full-Subtractor
A full-subtractor is a combinational circuit that performs subtraction involving three bits: a minuend bit, a subtrahend bit and a borrow from the previous stage.
The logic symbol of a full-subtractor is shown in Figure. 4.6(a). It has three
inputs, An (minuend), Bn (Subtrahend) and Cn–1 (borrow from previous state) and
two outputs D (difference) and Cn (borrow). The truth table for a full-subtractor is
given in Table 4.4.
[Figure 4.6: (a) logic symbol of a full-subtractor with inputs An, Bn, Cn–1 and outputs D, Cn; (b) full-subtractor built from two half-subtractors; (c) gate-level implementation]
Inputs                              Outputs
Minuend   Subtrahend   Borrow-in    Difference   Borrow-out
An        Bn           Cn–1         D            Cn
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1
The minterms taken from the truth table give the Boolean expression (SOP)
for difference D and is given by,
D = An'Bn'Cn–1 + An'BnCn–1' + AnBn'Cn–1' + AnBnCn–1
Simplifying, D = (An'Bn + AnBn')Cn–1' + (An'Bn' + AnBn)Cn–1

= (An ⊕ Bn)Cn–1' + (An ⊕ Bn)'Cn–1
or, D = An ⊕ Bn ⊕ Cn–1
Similarly, the sum of products expression for Cn can be written from the truth
table as
Cn = An'Bn'Cn–1 + An'BnCn–1' + An'BnCn–1 + AnBnCn–1
We notice that the equation for D is the same as the sum output for a full-
adder and the output Cn resembles the carry out for full-adder, except that An is
complemented. From these similarities, we understand that it is possible to convert
a full-adder into a full-subtractor by merely complementing An prior to its application
to the input of gates that form the borrow output as shown in Figure 4.8.
The K-map for Cn (rows Cn–1, columns AnBn) is:

AnBn:        00   01   11   10
Cn–1 = 0:     0    1    0    0
Cn–1 = 1:     1    1    1    0
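The D and Cn equations above can be sketched in Python; the function name is illustrative, and the borrow follows the simplified form An'Bn + An'Cn–1 + BnCn–1 read off the K-map.

```python
# Full-subtractor: difference is the three-input XOR; borrow-out is
# An'Bn + An'Cn-1 + BnCn-1 (minuend a, subtrahend b, borrow-in).
def full_subtractor(a, b, borrow_in):
    difference = a ^ b ^ borrow_in
    borrow_out = ((1 - a) & b) | ((1 - a) & borrow_in) | (b & borrow_in)
    return difference, borrow_out
```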
4.4 DECODERS
Many digital systems require the decoding of data. Decoding is necessary in such
applications as data multiplexing, rate multiplying, digital display, digital-to-analog
converters and memory addressing. It is accomplished by matrix systems that can
be constructed from such devices as magnetic cores, diodes, resistors, transistors
and FETs.
A decoder is a combinational logic circuit which converts binary information
from n input lines to a maximum of 2^n unique output lines such that each output line will be activated for only one of the possible combinations of inputs. If the n-bit decoded information has unused or don't care combinations, the decoder output will have fewer than 2^n outputs.
A decoder is similar to a demultiplexer, with one exception: there is no data input.
A single binary word n digits in length can represent 2^n different elements of information.
An AND gate can be used as the basic decoding element because its output
is HIGH only when all of its inputs are HIGH. For example, suppose the input binary number is 1011. In order to make sure that all of the inputs to the AND gate are HIGH when the binary number 1011 occurs, the third bit (0) must be inverted.
If a NAND gate is used in place of the AND gate, a LOW output will
indicate the presence of the proper binary code.
4.4.1 3-Line-to-8-Line Decoder
Figure 4.9 shows the reference matrix for decoding a binary word of 3 bits. In this case, 3 inputs are decoded into eight outputs. Each output represents one of the minterms of the 3 input variables. The control equations of this 3-bit binary decoder are implemented in Figure 4.9. The operation of this circuit is listed in Table 4.5.
Table 4.5 Truth Table for 3-to-8 Line Decoder
Inputs Outputs
A B C D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1
D0 = A'B'C'
D1 = A'B'C
D2 = A'BC'
D3 = A'BC
D4 = AB'C'
D5 = AB'C
D6 = ABC'
D7 = ABC
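The one-hot behaviour of Table 4.5 can be sketched in Python; the function name is illustrative.

```python
# 3-line-to-8-line decoder: the binary value of the inputs ABC selects
# which single output line Dk goes HIGH; all other lines stay LOW.
def decoder_3_to_8(a, b, c):
    index = a * 4 + b * 2 + c          # weight A by 4, B by 2, C by 1
    return [1 if k == index else 0 for k in range(8)]
```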
4.5 ENCODERS
Fig. 4.10 Block Diagram of an Encoder (n input lines, only one HIGH at a time; m-bit output code)
An encoder has n input lines, only one of which is active at any time, and m output lines. It encodes one of the active inputs, such as a decimal or octal digit, to a coded
output such as binary or BCD. Encoders can also be used to encode various
symbols and alphabetic characters. The process of converting from familiar symbols
or numbers to a coded format is called encoding. In an encoder, the number of
outputs is always less than the number of inputs. The block diagram of an encoder
is shown in Figure 4.10.
Y0 = D1 + D3 + D5 + D7
Y1 = D2 + D3 + D6 + D7
Y2 = D4 + D5 + D6 + D7
The design is made simple by the fact that only eight out of the total 2^n possible input conditions are used. Table 4.6 shows the truth table for an octal-to-binary encoder.
Table 4.6 Truth Table Octal-to-Binary Encoder
Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 Y2 Y1 Y0
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
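The three OR equations for Y0, Y1 and Y2 can be sketched in Python; octal_encoder is an illustrative name, and the input is assumed to have exactly one line HIGH as the text requires.

```python
# Octal-to-binary encoder following Table 4.6: each output bit is the
# OR of the input lines listed in its equation.
def octal_encoder(d):
    """d is a list of eight input lines D0..D7, exactly one of which is 1."""
    y0 = d[1] | d[3] | d[5] | d[7]
    y1 = d[2] | d[3] | d[6] | d[7]
    y2 = d[4] | d[5] | d[6] | d[7]
    return y2, y1, y0                  # MSB first
```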
4.6 MULTIPLEXER
This type of encoder has ten inputs, one for each decimal digit, and four outputs corresponding to the BCD code, as shown in Figure 4.12. The truth table for a
decimal-to-BCD encoder is given in Table 4.7. From the truth table, we can
determine the relationship between each BCD input and the decimal digits. For
example, the most significant bit of the BCD code, D is a 1 for decimal digit 8 or
9. The OR expression for bit D in terms of the decimal digits can therefore be written as
D = 8 + 9
The output C is HIGH for decimal digits 4, 5, 6 and 7 and can be written as,
C = 4 + 5 + 6 + 7
[Figure 4.12: Decimal-to-BCD encoder with decimal inputs 0–9 and BCD outputs 8, 4, 2, 1]
4.7 DEMULTIPLEXER
Figure 4.16 shows the logic diagram of a 2-input multiplexer with strobe input.
When the strobe input E is at logic 0, the output of NOT gate G5 is 1 and the AND gates G1 and G2 are enabled. Accordingly, when S = 0 and S = 1, inputs A and B respectively are selected as before. When the strobe input E = 1, all lines are disabled and the circuit will not function.
4.7.2 Four-Input Multiplexer
A logic symbol and diagram of a 4-input multiplexer are shown in Figure 4.17. It
has two data select lines S0 and S1 and four data input lines. Each of the four data
input lines is applied to one input of an AND gate.
Depending on S1S0 being 00, 01, 10 or 11, data from input lines A to D are selected in that order. The corresponding outputs are given in Table 4.8.
Table 4.8 Function Table of the 4-Input Multiplexer
S1 S0 Y
0 0 A
0 1 B
1 0 C
1 1 D
Y = AS1'S0' + BS1'S0 + CS1S0' + DS1S0
If S1S0 = 00 (binary 0) is applied to the data select lines, the data on input A appears on the data output line.
Y = A·1·1 + B·1·0 + C·0·1 + D·0·0
= A (Gate G1 is enabled)
Similarly, Y = B·S1′·S0 = B·1·1 = B when S1S0 = 01 (Gate G2 is enabled)
Y = C·S1·S0′ = C·1·1 = C when S1S0 = 10 (Gate G3 is enabled)
Y = D·S1·S0 = D·1·1 = D when S1S0 = 11 (Gate G4 is enabled)
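The gate-level behaviour just described can be checked with a short Python sketch (the function name `mux4` is illustrative, not from the text):

```python
def mux4(a, b, c, d, s1, s0):
    """4-to-1 multiplexer: routes one of inputs A-D to output Y
    according to the data select lines S1, S0."""
    n1, n0 = 1 - s1, 1 - s0          # complemented select lines
    return (a & n1 & n0) | (b & n1 & s0) | (c & s1 & n0) | (d & s1 & s0)
```

With S1S0 = 00 only the A term can be 1, with 01 only the B term, and so on, matching the function table.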
In a similar way, we can construct 8 × 1 MUXes, 16 × 1 MUXes, etc. Nowadays,
two-, four-, eight- and 16-input multiplexers are readily available in the TTL and
CMOS logic families. These basic ICs can be combined for multiplexing a larger
number of inputs.
Multiplexer Applications: Multiplexer circuits find numerous applications in digital
systems. These applications include data selection, data routing, operation
sequencing, parallel-to-serial conversion, waveform generation and logic function
generation.
4.8 FLIP-FLOPS
Synchronous circuits change their states only when clock pulses are present. The
operation of the basic latch can be modified by providing an additional control
input that determines when the state of the circuit is to be changed. A latch with
the additional control input is called a flip-flop. The additional control input is
either the clock or the enable input.
Flip-flops are of different types depending on how their inputs and clock
pulses cause transitions between the two states. There are four basic types, namely,
S-R, J-K, D and T flip-flops.
4.8.1 S-R Flip-Flop
The S-R flip-flop consists of two additional AND gates at the S and R inputs of
an S-R latch, as shown in Figure 4.18.
When the clock input is LOW, the outputs of both AND gates are LOW and changes in the S and R inputs will not affect the output (Q) of
the flip-flop. When the clock input becomes HIGH, the value at S and R inputs will
be passed to the output of the AND gates and the output (Q ) of the flip-flop will
change according to the changes in S and R inputs as long as the clock input is
HIGH. In this manner, one can strobe or clock the flip-flop so as to store either a
1 by applying S = 1, R = 0 (to set) or a 0 by applying S = 0, R = 1 (to reset) at any
time and then hold that bit of information for any desired period of time by applying
a LOW at the clock input. This flip-flop is called clocked S-R flip-flop.
The S-R flip-flop which consists of the basic NOR latch and two AND
gates is shown in Figure 4.19.
The S-R flip-flop which consists of the basic NAND latch and two other
NAND gates is shown in Figure 4.20. The S and R inputs control the state of the
flip-flop in the same manner as described earlier for the basic or unclocked S-R
latch. However, the flip-flop does not respond to these inputs until the rising edge
of the clock signal occurs. The clock pulse input acts as an enable signal for the
other two inputs. The outputs of NAND gates 1 and 2 stay at the logic 1 level as
long as the clock input remains at 0. This 1 level at the inputs of NAND-based
basic S-R latch retains the present state, i.e., no change occurs. The characteristic
table of the S-R flip-flop is shown in truth table of Table 4.9 which shows the
operation of the flip-flop in tabular form.
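The clocked S-R behaviour described above can be captured as a small next-state function in Python (a sketch; the name `sr_next` is illustrative, and the forbidden S = R = 1 input is rejected):

```python
def sr_next(q, s, r, clk=1):
    """Next state of a clocked S-R flip-flop (per Table 4.9)."""
    if not clk:                # clock LOW: AND gates blocked, state held
        return q
    if s and r:
        raise ValueError("S = R = 1 is not allowed (indeterminate state)")
    if s:
        return 1               # set
    if r:
        return 0               # reset
    return q                   # S = R = 0: no change
```

Holding `clk = 0` stores the last written bit for as long as required, exactly as the strobing description above explains.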
Table 4.9 Characteristic Truth Table of S-R Flip-Flop

S R Qn+1
0 0 Qn (no change)
0 1 0 (reset)
1 0 1 (set)
1 1 Indeterminate (not allowed)
4.8.2 D Flip-Flop
The D (delay) flip-flop has only one input, called the Delay (D) input, and two
outputs Q and Q′. It can be constructed from an S-R flip-flop by inserting an
inverter between S and R and assigning the symbol D to the S input. The structure
of the D flip-flop is shown in Figure 4.21(a). Basically, it consists of a NAND flip-flop
with a gating arrangement on its inputs. It operates as follows:
1. When the CLK input is LOW, the D input has no effect, since the set and
reset inputs of the NAND flip-flop are kept HIGH.
2. When the CLK goes HIGH, the Q output will take on the value of the D
input. If CLK =1 and D =1, the NAND gate-1 output goes 0 which is the
S input of the basic NAND-based S-R flip-flop and NAND gate-2 output
goes 1 which is the R input of the basic NAND-based S-R flip-flop.
Therefore, for S = 0 and R = 1, the flip-flop output will be 1, i.e., it follows
D input. Similarly, for CLK=1 and D = 0, the flip-flop output will be 0. If D
changes while the CLK is HIGH, Q will follow and change quickly.
The logic symbol for the D flip-flop is shown in Figure 4.21(b). A simple
way of building a delay D flip-flop is shown in Figure 4.21(c). The truth table of D
flip-flop is given in Table 4.10, from which it is clear that the next state of the
flip-flop (Qn+1) follows the value of the input D when the clock pulse is applied.
As the transfer of data from the input to the output is delayed, it is known as a Delay
(D) flip-flop. The D-type flip-flop is either used as a delay device or as a latch to
store 1 bit of binary information.
Figure 4.21 D Flip-Flop: (a) Using NAND Gates (b) Logic Symbol (c) Using S-R Flip-Flop
Table 4.10 Truth Table of D Flip-Flop

CLK D Qn+1
1 0 0
1 1 1
0 X No change
From the above state diagram, it is clear that when D =1, the next state will
be 1; when D = 0, the next state will be 0, irrespective of its previous state. From
the state diagram, one can draw the Present state–Next state table and the
application or excitation table for the Delay flip-flop as shown in Table 4.11 and
Table 4.12 respectively.
Table 4.11 Present State–Next State Table for D Flip-Flop
Qn D Qn+1
0 0 0
0 1 1
1 0 0
1 1 1
Table 4.12 Application or Excitation Table for D Flip-Flop

Qn Qn+1 Excitation Input D
0 0 0
0 1 1
1 0 0
1 1 1
Using the Present state–Next state table, the K-map for the next state (Qn+1)
of the Delay flip-flop can be drawn as shown in Figure 4.23 and the simplified
expression for Qn+1 can be obtained as described below.
From the above K-map, the characteristic equation for Delay flip-flop is,
Qn+1 = D
Hence, in a Delay flip-flop, the next state follows the Delay input, as
represented by the characteristic equation.
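The behaviour just described, where the next state simply follows the D input whenever the flip-flop is clocked, can be sketched in one line of Python (the function name is illustrative):

```python
def d_next(q, d, clk=1):
    """D flip-flop next state: with S = D and R = D', the clocked S-R
    behaviour reduces to Q(n+1) = D; the state is held while CLK is LOW."""
    return d if clk else q
```

This is why the D flip-flop serves either as a one-clock delay element or as a 1-bit latch.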
4.8.3 J-K Flip-Flop
A J-K flip-flop has a characteristic similar to that of an S-R flip-flop. In addition,
the indeterminate condition of the S-R flip-flop is permitted in it. Inputs J and K
behave like inputs S and R to set and reset the flip-flop, respectively. When J = K =
1, the flip-flop output toggles, i.e., switches to its complement state; if Q = 0, it
switches to Q =1 and vice versa.
A J-K flip-flop can be obtained from the clocked S-R flip-flop by augmenting
two AND gates as shown in Figure 4.24(a). The data input J and the complemented
output Q′ are applied to the first AND gate, and its output (J·Q′) is applied to the S
input of the S-R flip-flop. Similarly, the data input K and the output Q are connected
to the second AND gate and its output (K·Q) is applied to the R input of the S-R
flip-flop. The graphic
symbol of J-K flip-flop is shown in Figure 4.24(b) and the truth table is shown in
Table 4.13. The output for the four possible input sequences are as follows.
(a) J-K Flip-Flop using S-R Flip-Flop (b) Graphic Symbol of J-K Flip-Flop
Table 4.13 Truth Table of J-K Flip-Flop

CLK J K Qn+1
X 0 0 Qn No change
1 0 1 0 Reset
1 1 0 1 Set
1 1 1 Qn′ Toggle
From the above state diagram, one can easily understand that the state
transition from 0 to 1 takes place whenever J is asserted (i.e., J =1 ) irrespective
of K value. Similarly, state transition from 1 to 0 takes place whenever K is asserted
(i.e., K = 1) irrespective of the value of J. Also, the state transition from 0 to 0
occurs whenever J = 0, irrespective of the value of K and the state transition from
1 to 1 occurs whenever K = 0, irrespective of J value.
From the above state diagram and truth table (Table 4.13) of J-K flip-flop,
the Present state–Next state table and application table or excitation table for J-K
flip-flop are shown in Table 4.14 and Table 4.15, respectively.
Table 4.14 Present State–Next State Table for J-K Flip-Flop

Qn J K Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0
From Table 4.14, a Karnaugh map (K-map) for the next state transition
(Qn+1) can be drawn as shown in Figure 4.25 and the simplified logic expression
which represents the characteristic equation of the J-K flip-flop can be obtained as
follows.
From the K-map shown in Figure 4.25, the characteristic equation of the J-K
flip-flop can be written as,
Qn+1 = J·Qn′ + K′·Qn
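The J-K next-state behaviour (set on J, reset on K, toggle on both, hold on neither) can be checked exhaustively with a short Python sketch (names illustrative; the prime denotes the complement):

```python
def jk_next(q, j, k):
    """J-K flip-flop next state: Q(n+1) = J*Q' + K'*Q."""
    return j & (1 - q) | (1 - k) & q

# Exhaustive check: for each (J, K), the pair of next states from Q = 0 and Q = 1
table = {(j, k): (jk_next(0, j, k), jk_next(1, j, k))
         for j in (0, 1) for k in (0, 1)}
```

The four entries reproduce the truth-table rows: no change, reset, set and toggle.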
4.8.4 T Flip-Flop
Another basic flip-flop, called the T (Trigger or Toggle) flip-flop, has only a
single data (T) input, a clock input and two outputs Q and Q′. The T-type flip-flop
is obtained from a J-K flip-flop by connecting its J and K inputs together. The
designation T comes from the ability of the flip-flop to ‘toggle’ or complement its
state.
The block diagram of a T flip-flop and its circuit implementation using a J-K
flip-flop are shown in Figure 4.27. The J and K inputs are wired together. The
truth table for T flip-flop is shown in Table 4.16.
(a) Block Diagram of T Flip-Flop (b) T Flip-Flop using a J-K Flip Flop
When the T input is in the 0 state (i.e., J = K = 0) prior to a clock pulse, the
Q output will not change with clocking. When the T input is at the 1 level (i.e., J = K = 1)
prior to clocking, the output will be in the complemented (Q′) state after clocking. In other
words, if the T input is a logical 1 and the device is clocked, the output will
change state regardless of what the output was prior to clocking. This is called toggling,
hence the name T flip-flop.
Table 4.16 Truth Table of T Flip-Flop
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0
The above truth table shows that when T = 0, then Qn+1 = Qn, i.e., the next
state is the same as the present state and no change occurs. When T = 1, then
Qn+1 = Qn′, i.e., the state of the flip-flop is complemented.
From the above state diagram, it is clear that when T = 1, the flip-flop
changes or toggles its state irrespective of its previous state. When T = 1 and Q n =
0, the next state will be 1 and when T = 1 and Q n = 1, the next state will be 0.
Similarly, one can understand that when T = 0, the flip-flop retains its previous
state. From the above state diagram, one can draw the Present state–Next state
table and application or excitation table for the Trigger flip-flop as shown in Table
4.17 and Table 4.18, respectively.
Table 4.17 Present State–Next State Table for T Flip-Flop
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0
From Table 4.17, the K-map for the next state (Qn+1) of the Trigger flip-flop
can be drawn as shown in Figure 4.29 and the simplified expression for Qn+1
can be obtained as follows.
From the K-map shown in Figure 4.29, the characteristic equation for the Trigger
flip-flop is,
Qn+1 = T·Qn′ + T′·Qn
So, in a Trigger flip-flop, the next state will be the complement of the previous
state when T = 1.
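The characteristic behaviour of Table 4.16 reduces to an exclusive-OR of T with the present state, which a one-line Python sketch makes plain (function name illustrative):

```python
def t_next(q, t):
    """T flip-flop next state: Q(n+1) = T*Q' + T'*Q, i.e. T XOR Q."""
    return t ^ q
```

With T = 1 the state complements on every clock; with T = 0 it is held unchanged.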
4.8.5 Master–Slave Flip-Flops
A Master–Slave flip-flop can be constructed using two J-K flip-flops as shown in
Figure 4.30. The first flip-flop, called the Master, is driven by the positive edge of
the clock pulse; the second flip-flop, called the Slave, is driven by the negative
edge of the clock pulse. Therefore, when the clock input has a positive edge, the
master acts according to its J-K inputs, but the slave does not respond since it
requires a negative edge at the clock input. When the clock input has a negative
edge, the slave flip-flop copies the master outputs. But the master does not respond
to the feedback from Q and Q , since it requires a positive edge at its clock input.
Thus, the Master–Slave flip-flop does not have the race-around problem.
If J = 1 and K = 0, the master flip-flop sets on the positive clock edge. The
HIGH Q (1) output of the master drives the J input of the slave. So, when the
negative clock edge hits, the slave also sets. The slave flip-flop copies the action
of the master flip-flop.
If J = 0 and K = 1, the master resets on the leading edge of the CLK pulse.
The HIGH Q′ output of the master drives the K input of the slave flip-flop. Then,
the slave flip-flop resets at the arrival of the trailing edge of the CLK pulse. Once
again, the slave flip-flop copies the action of the master flip-flop.
If J = K = 1, the master flip-flop toggles on the positive clock edge and the
slave toggles on the negative clock edge. The condition J = K = 0 does not
produce any change.
Master–Slave flip-flops operate from a complete clock pulse and the outputs
change on the negative transition.
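One complete clock pulse of the master-slave arrangement can be simulated in a simplified Python sketch: the master evaluates the J-K next state on the positive edge, and the slave copies it on the negative edge (names illustrative):

```python
def ms_jk_pulse(q, j, k):
    """Output after one complete clock pulse of a master-slave J-K
    flip-flop: the master latches J*Q' + K'*Q on the positive edge,
    and the slave copies the master on the negative edge."""
    master = j & (1 - q) | (1 - k) & q   # master responds (positive edge)
    return master                         # slave copies master (negative edge)

# With J = K = 1 the output toggles exactly once per complete pulse,
# which is why the arrangement avoids the race-around problem.
q = 0
history = [q]
for _ in range(4):
    q = ms_jk_pulse(q, 1, 1)
    history.append(q)
```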
4.9 REGISTERS
Block diagram of registers in a simple digital system: keypad (0–9), encoder, shift registers, processing unit and decoder
There are two modes of operation for registers. The first operation is series
or serial operation. The second type of operation is parallel shifting. Input
and output functions associated with registers include (1) serial input/serial output,
(2) serial input/parallel output, (3) parallel input/parallel output and (4) parallel
input/serial output.
Hence input data are presented to registers in either a parallel or a serial
format.
To input parallel data to a register requires that all the flip-flops be affected
(set or reset) at the same time. To output parallel data requires that the flip-flop Q
outputs be accessible. Serial input data loading requires that one data bit at a time
is presented to either the most or least significant flip-flop. Data are shifted from
the flip-flop initially loaded to the next one in series. Serial output data are taken
from a single flip-flop, one bit at a time.
Serial data input or output operations require multiple clock pulses. Parallel
data operations only take one clock pulse. Data can be loaded in one format and
removed in another. Two functional parts are required by all shift registers: (1)
data storage flip-flops and (2) logic to load, unload and shift the stored information.
The block diagrams of the four basic register types are shown in Figure 4.33.
Registers can be designed using discrete flip-flops (S-R, J-K and D-type). Registers
are also available as MSI devices.
Figure 4.33 Block Diagrams of the Four Basic Register Types
Figure 4.34 Four-Stage Shift Registers: (a) J-K Type (b) D Type
For the register of Figure 4.34(b) using D FFs, a single data line is connected
between stages; again, four shift pulses are required to shift a 4-bit word into the
4-stage register.
The shift pulse is applied to each stage, operating each simultaneously. When
the shift pulse occurs, the data input is shifted into that stage. Each stage is set or
reset corresponding to the input data at the time the shift pulse occurs. Thus the
input data bit is shifted into stage A by the first shift pulse. At the same time the
data of stage A is shifted into stage B, and so on for the following stages. For each
shift pulse, data stored in the register stages shift left by one stage. New data are
shifted into stage A, whereas the data present in stage D are shifted out (to the
left) for use by some other shift register or computer unit.
For example, consider starting with all stages reset and applying a steady
logical-1 input as data input to stage A. The data in each stage after each of four
shift pulses is shown in Table 4.19. Notice in Table 4.19 that the logical-1 input
shifts into stage A and then shifts left to stage D after four shift pulses.
As another example, consider shifting alternate 0 and 1 data into stage A,
starting with all stages at logical-1. Table 4.20 shows the data in each stage after
each of four shift pulses.
Table 4.19 Operation of Shift-Left Register

Shift Pulse D C B A
0 0 0 0 0
1 0 0 0 1
2 0 0 1 1
3 0 1 1 1
4 1 1 1 1
As a third example of shift register operation, consider starting with the count
in step 4 of Table 4.20 and applying four more shift pulses
while placing a steady logical-0 input as data input to stage A. This is shown in
Table 4.21.
Table 4.20 Shift-Register Operation

Shift Pulse D C B A
0 1 1 1 1
1 1 1 1 0
2 1 1 0 1
3 1 0 1 0
4 0 1 0 1

Table 4.21 Final Stage

Shift Pulse D C B A
0 0 1 0 1
1 1 0 1 0
2 0 1 0 0
3 1 0 0 0
4 0 0 0 0
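The shift-left action tabulated above can be reproduced with a short Python sketch (stage order [D, C, B, A]; names illustrative):

```python
def shift_left(stages, data_in):
    """One shift pulse of the 4-stage shift-left register: `stages` is
    [D, C, B, A]; the new bit enters stage A and stage D's bit shifts out."""
    d, c, b, a = stages
    return [c, b, a, data_in]

# Steady logical-1 input into an initially reset register (Table 4.19)
reg = [0, 0, 0, 0]
rows = [reg]
for _ in range(4):
    reg = shift_left(reg, 1)
    rows.append(reg)

# Steady logical-0 input starting from D C B A = 0 1 0 1 (Table 4.21)
reg2 = [0, 1, 0, 1]
for _ in range(4):
    reg2 = shift_left(reg2, 0)
```

After four pulses the first register holds 1111 and the second is cleared to 0000, matching the tables.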
Fig. 4.35 J-K Flip-Flops in Shift Right Register
When the second clock pulse occurs, the 0 on the data input is 'shifted' into
FF A because FF A RESETs, and the 1 that was in FF A is 'shifted' into FF B.
The next 1 in the binary number is now put onto the data-input line, and a clock
pulse is applied. The 1 is entered into FF A, the 0 stored in FF A is shifted into FF
B, and the 1 stored in FF B is shifted into FF C. The last bit in the binary number,
a 1, is now applied to the data input, and a clock pulse is applied. This time the 1 is
entered into FF A, the 1 stored in FF A is shifted into FF B, the 0 stored in FF B is
shifted into FF C, and the 1 stored in FF C is shifted into FF D. This completes the
serial entry of the 4-bit binary number into the shift register, where it can be stored
for any amount of time. Table 4.23 shows the action of shifting all logical-1 inputs
into an initially reset shift register. Table 4.22 shows the register operation for the
entry of 1101.
Table 4.22 Register Operation

Shift Pulse QA QB QC QD
0 0 0 0 0
1 1 0 0 0
2 0 1 0 0
3 1 0 1 0
4 1 1 0 1

Table 4.23 Shifting Logical Inputs

Shift Pulse QA QB QC QD
0 0 0 0 0
1 1 0 0 0
2 1 1 0 0
3 1 1 1 0
4 1 1 1 1
The waveforms shown in Figure 4.36 illustrate the entry of the 4-bit number 0100.
For a J-K FF, the data bit to be shifted into the FF must be present at the J and
K inputs when the clock transitions (low or high). Since the data bit is either a 1 or
a 0, there are two cases:
1. To shift a 0 into the FF, J = 0 and K = 1.
2. To shift a 1 into the FF, J = 1 and K = 0.
At time A: All the FFs are reset. The FF outputs just after time A are
QRST = 0000.
At time B: The FFs all contain 0s, the FF outputs are QRST = 0000.
Figure 4.36 Waveforms for Serial Entry of 0100
(a) Logic Diagram and (b) Logic Symbol of a 4-Bit Serial-In Shift Register (SRG 4)
When SHIFT/LOAD is HIGH, AND gates G1 through G3 are
disabled and AND gates G4 through G6 are enabled, allowing the data bits to shift
right from one stage to the next. The OR gates allow either the normal shifting
operation or the parallel data-entry operation, depending on which AND gates
are enabled by the level on the SHIFT/LOAD input.
(a) Logic Diagram and (b) Logic Symbol of a 4-Bit Parallel In/Serial Out Shift Register with SHIFT/LOAD Control
Logic diagram of a 4-bit parallel in/parallel out register using D flip-flops
4.10 COUNTERS
(a) Logic Diagram and (b) Waveform Diagram of a 4-Bit Binary Ripple Counter
Number of Clock Pulses  QD QC QB QA (States)
0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0
MOD–Number or Modulus
The MOD-number (or the modulus) of a counter is the total number of states
which the counter goes through in each complete cycle.
MOD number = 2^N
where N = number of flip-flops.
The maximum binary number counted by the counter is 2^N – 1. Thus, a 4-flip-flop counter
can count as high as (1111)2 = 2^4 – 1 = 16 – 1 = (15)10. The MOD number can be
increased by adding more FFs to the counter.
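These two relations are a one-line computation each (a Python sketch; function names illustrative):

```python
def mod_number(n):
    """Total number of states of an n-flip-flop binary counter: 2**N."""
    return 2 ** n

def max_count(n):
    """Highest count reached before rollover: 2**N - 1."""
    return 2 ** n - 1
```

For the 4-flip-flop counter above, `mod_number(4)` is 16 and `max_count(4)` is 15.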
4.10.2 Synchronous Counter Operations
A synchronous, parallel, or clocked counter is one in which all stages are triggered
simultaneously.
When the carry has to propagate through a chain of n flip-flops, the overall
propagation delay time is n·tpd. For this reason, ripple counters are too slow for some
applications. To get around the ripple-delay problem, we can use a synchronous counter.
A 4-bit (MOD-16) synchronous counter with parallel carry is shown in
Figure 4.42. The clock is connected directly to the CLK input of each flip-flop,
i.e., the clock pulses drive all flip-flops in parallel. In this counter, only the LSB flip-flop
A has its J and K inputs connected permanently to VCC, i.e., at the high level.
The J and K inputs of the other flip-flops are driven by some combination of flip-flop
outputs. The J and K inputs of flip-flop B are connected to the QA output. The J and
K inputs of FF C are connected with the AND-operated output of QA and QB. Similarly,
the J and K inputs of FF D are connected with the AND-operated output of QA, QB and QC.
Figure 4.42 4-Bit (MOD-16) Synchronous Counter with Parallel Carry (JA and KA tied to VCC)
For this circuit to count properly, on a given negative transition of the clock, only
those FFs that are supposed to toggle on that transition should have J = K = 1
when the negative transition occurs. According to the state Table 4.25, FF A is
required to change state with the occurrence of each clock pulse. FF B changes its
state when QA = 1. Flip-flop C toggles only when QA = QB = 1. And flip-flop
D changes state only when QA = QB = QC = 1. In other words, a flip-flop toggles
on the next negative transition clock edge if all lower-order bits are 1s.
The counting action of the counter is as follows:
1. The first negative clock edge sets QA to get Q = 0001.
2. Since QA is 1, FF B is conditioned to toggle on the next negative clock edge.
3. When the second negative clock edge arrives, QB and QA toggle
simultaneously and the output word becomes Q = 0010. This process continues.
4. By adding more flip-flops and gates, we can build a synchronous counter of
any length. The advantage of a synchronous counter is its speed: it takes only
one propagation delay time for the correct binary count to appear after the
clock edge hits.
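The toggle conditions above can be simulated to confirm that the counter steps through the binary sequence 0000 to 1111 and rolls over (a Python sketch; names illustrative):

```python
def sync_count_pulse(qd, qc, qb, qa):
    """One negative clock edge of the MOD-16 synchronous counter:
    a flip-flop toggles only when all lower-order outputs are 1."""
    return (qd ^ (qc & qb & qa),   # D toggles when QA = QB = QC = 1
            qc ^ (qb & qa),        # C toggles when QA = QB = 1
            qb ^ qa,               # B toggles when QA = 1
            qa ^ 1)                # A toggles on every pulse

state = (0, 0, 0, 0)               # (QD, QC, QB, QA)
seq = [state]
for _ in range(16):
    state = sync_count_pulse(*state)
    seq.append(state)
```

After n pulses the state is the 4-bit binary value of n, and the sixteenth pulse returns the counter to 0000.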
Table 4.25 State Table of 4-bit Binary Ripple Counter

State QD QC QB QA
0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0
4.12 SUMMARY
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
CPU Essentials
BLOCK - II
BASICS OF CPU AND BUSES
UNIT 5 CPU ESSENTIALS
Structure
5.0 Introduction
5.1 Objectives
5.2 Modern CPU Concepts
5.3 CPU: Circuit Size and Die Size
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings
5.0 INTRODUCTION
In this unit, you will learn about the Central Processing Unit (CPU), circuit size,
Die size and Processor Cooling. A central processing unit (CPU) is the electronic
circuitry within a computer that carries out the instructions of a computer program
by performing the basic arithmetic, logical, control and input/output (I/O) operations.
It is the most important component of the computer. A die, in the context
of integrated circuits, is a small block of semiconducting material on which a given
functional circuit is fabricated. The die size of the processor refers to its physical
surface area size on the wafer. The circuit size or feature size refers to the level of
miniaturization of the processor. To make more powerful processors,
more transistors are needed. A CPU cooler is a device designed to draw heat away
from the system CPU and other components in the enclosure.
5.1 OBJECTIVES
5.2 MODERN CPU CONCEPTS
The Central Processing Unit (CPU) is the most important component of the
computer. The CPU itself is an internal part of the computer system and is usually
a microprocessor-based chip housed on single or at times multiple printed circuit
boards. The CPU is directly inserted on the motherboard and each motherboard
is compatible with a specific series of CPUs only. The CPU generates a lot of heat
and has a heat sink and a cooling fan attached on the top which helps it to disperse
heat.
The market of microprocessors is dominated primarily by Intel and AMD,
both of which manufacture IBM-compatible CPUs. Motorola also manufactures
CPUs for Macintosh-based PCs. Cyrix, another IBM-compatible CPU
manufacturer, is next in line after Motorola in the market in terms of global sales.
Types of Processors
The brands of CPUs listed are not the only differentiating factors between different
processors. There are various technical aspects to these processors, which allow
us to differentiate between CPUs of different power, speed and processing
capability. Accordingly, each of these manufacturers sells numerous product lines
offering CPUs of different architecture, speed, price range, etc. The following are
the most common aspects of modern CPUs that enable us to judge their quality or
performance:
1. 32 or 64-Bit Architecture: A bit is the smallest unit of data that a computer
processes. 32 or 64-bit architecture refers to the number of bits that the
CPU can process at a time.
2. Clock Rate: The speed at which the CPU performs basic operations,
measured in hertz (Hz) or, in modern computers, megahertz (MHz) or
gigahertz (GHz).
3. Number of Cores: CPUs with more than one core are essentially multiple
CPUs running in parallel to enable more than one operation to be performed
simultaneously. Current ranges of CPUs offer up to eight cores. Currently,
the dual core, i.e., two cores CPU is most commonly used for standard
desktops and laptops and Quad core, i.e., four cores, is popular for entry
level servers.
4. Additional Technology or Instruction Sets: These refer to unique features
that a particular CPU or range of CPUs offer to provide additional processing
power or reduced running temperature. These range from Intel’s MMX,
SSE3 and HT to AMD’s 3DNOW and Cool n Quiet.
These technical factors are the basic way to judge how a CPU will perform.
It is important to consider multiple factors when looking at a CPU rather than just
the clock speed or any one specification on its own. For example, a 64-bit 3GHz
processor with one core may perform poorly in comparison to a 32-bit 2GHz
processor with two cores. Similarly, different processors are better suited for
different tasks. For instance, Motorola processors are always rated higher for
graphic applications than Intel or AMD processors, which perhaps explains why
Macintosh uses them for their computers. It is very easy for a single-core processor
to run music videos, the Internet applications or games individually, but when multiple
applications are run together, it starts to slow down. A system running on a dual-core
processor would be able to multitask better than a single-core processor,
while it is very easy for an 8-core processor to run all these applications plus a lot
more without showing any signs of slowing down. However, Intel’s 4-core
processors are actually two dual-core processors combined in a single processor,
whereas AMD’s 4-core processors are actually four processors built in a single
chip.
It is not true that the greater the number of processors, the faster the computer gets,
but it is true that the greater the number of processors, the higher the processing capability.
Therefore, a combination of the above-mentioned specifications, along with
the operating systems that the processor supports and the specific purpose for
which the computer is to be used, are the factors to be considered when deciding
which CPU is the most suitable for your needs.
Processor Clock
The speed at which the processor executes commands is called the processor
speed or clock speed. Every computer contains an internal clock (known as the
system clock) that regulates the rate at which instructions are executed and
synchronizes the various computer components. The processor requires a fixed
number of clock cycles (electric pulses) to execute each instruction.
Clock cycles are required to fetch, decode and execute a single program
instruction. Thus, the shorter the clock cycle, the faster the processor.
In a computer, clock speed, therefore, refers to the number of pulses per
second generated by an oscillator that sets the tempo for the processor. It is usually
measured in MHz (megahertz, millions of pulses per second) or GHz (gigahertz,
billions of pulses per second).
Computer clock speed has been roughly doubling every year. The Intel
8088, common in computers in the early 1980s, ran at 4.77 MHz. Today's
personal computers run at clock speeds of 100–1000 MHz and some even
exceed one gigahertz.
Although the processing speed in personal computers is measured in terms
of megahertz, the processing speed of mini computers and mainframe systems is
measured in terms of Millions of Instructions Per Second (MIPS) or Billions of
Instructions Per Second (BIPS). This is because personal computers generally
Self-Instructional
Material 99
CPU Essentials employ a single microprocessor chip as their CPU while other classes of computers
employ multiple processors to speed up their overall performance. Thus, a
minicomputer having a speed of 500 MIPS is capable of executing 500 million
instructions per second.
Clock speed is a measure of computer ‘power,’ but it is not always directly
proportional to the performance level. If you double the speed of the clock,
leaving all other hardware unchanged, you will not necessarily double the
processing speed. The type of microprocessor, the bus architecture, and the
nature of the instruction set all make a difference. In some applications the amount
of RAM is important too.
Processor/Computer Cooling
Computer cooling is required to remove the waste heat produced by computer
components, to keep components within permissible operating temperature limits.
Components that are susceptible to temporary malfunction or permanent failure if
overheated include integrated circuits such as central processing units (CPUs),
chipset, graphics cards, and hard disk drives.
Components are often designed to generate as little heat as possible, and
computers and operating systems may be designed to reduce power consumption
and consequent heating according to workload, but more heat may still be
produced than can be removed without attention to cooling. Use of heatsinks
cooled by airflow reduces the temperature rise produced by a given amount of
heat. Attention to patterns of airflow can prevent the development of hotspots.
Computer fans are widely used along with heatsinks to reduce temperature
by actively exhausting hot air.
Pipelining
Pipelining is a technique that decomposes any sequential process into smaller
subprocesses, which are independent of each other so that each subprocess can
be executed in a special dedicated segment and all these segments operate
concurrently. Thus, the whole task is partitioned into independent subtasks and each
subtask is executed by a segment. The result obtained as the output of a segment
(after performing all computation in it) is transferred to the next segment in the pipeline
and the final result is obtained after the data have passed through all segments. It
can thus be understood that each segment consists of an input register followed by a
combinational circuit. The combinational circuit performs the required sub-operation
and the register holds the intermediate result. The output of one combinational
circuit is given as input to the next segment.
The concept of pipelining in computer organization is analogous to an industrial
assembly line, where the work is split among divisions such as manufacturing,
packing and delivery. Pipelining likewise speeds up the overall process.
Self-Instructional
100 Material
Pipelining can be effectively implemented for systems having the following
characteristics:
The system should repeatedly execute a basic function.
The basic function must be divisible into independent stages such that each
stage has minimal overlap.
The complexity of the stages should be roughly similar.
A pipeline has a basic flow of information. To understand how it works for
computer systems, consider a process that involves four steps (segments) and is
to be repeated six times. If a single step takes t ns, then the time required to
complete one process is 4t ns, and to repeat it six times without pipelining we
require 24t ns. With a four-segment pipeline, the first result emerges after 4t ns
and a new result emerges every t ns thereafter, so all six results are obtained in
(4 + 6 − 1)t = 9t ns.
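The timing argument above can be checked with a short calculation (a minimal sketch in Python; the function names are mine): a k-stage pipeline processing n tasks needs (k + n − 1) cycles, versus k·n cycles without pipelining.

```python
def non_pipelined_time(k, n, t):
    """Each of n tasks passes through all k stages one after another."""
    return k * n * t

def pipelined_time(k, n, t):
    """The first task takes k cycles; each remaining task completes one cycle later."""
    return (k + n - 1) * t

k, n, t = 4, 6, 1  # 4 segments, 6 repetitions, t ns per segment
print(non_pipelined_time(k, n, t))  # 24t ns, as in the text
print(pipelined_time(k, n, t))      # 9t ns with pipelining
print(non_pipelined_time(k, n, t) / pipelined_time(k, n, t))  # speedup of about 2.67
```

As n grows, the speedup approaches k, the number of segments.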
Instruction Sets: CISC and RISC
With the fall in hardware prices and the growth in the number of computer
instructions, the complexity of computer systems has also increased. Newer models
can support more customer-oriented applications, and it is easier to add instructions
that facilitate the translation of high-level language programs into machine language.
A computer with a large number of instructions is classified as a Complex
Instruction Set Computer (CISC).
Complex Instruction Set Computer (CISC)
While designing CISC computers, the main concern was to support programming
in high-level languages, which required translating high-level language into
machine-level language with the help of a compiler. Thus, the basic purpose of
developing CISC was to simplify compilation. The goal of CISC architecture was
to provide a single instruction for each statement written in a high-level language.
The CISC processor was designed to be easy to program and to make efficient
use of memory. This technology was commonly implemented in such large
computers as the PDP-11 and the DEC systems. The CISC philosophy became
popular with the advent of the Intel family. Most common microprocessor designs,
such as the Intel 80x86 and Motorola 68K series, follow the CISC philosophy.
However, current trends are hybrids of CISC and RISC technologies; thus, CISC
and RISC share many principles.
Initially, CISC machines used available technologies to optimize computer
performance. Later, microprogrammed control units were designed. Written in
microcode and easy to implement, a microprogrammed control unit is much less
expensive than a hardwired control unit. Because of the ease of microprogramming,
new instructions can easily be introduced, allowing designers to make CISC
machines compatible with previous ones: a computer designed with new
microprogram code could run the same programs executed on earlier computers,
since it contains a superset of their instructions. As these microprogrammed
instruction sets were written to match the constructs of high-level languages, the
design of the compiler did not have to be complicated.
Reduced Instruction Set Computer (RISC)
A RISC processor is designed with a limited number of simple instructions. The
main characteristics of RISC architecture are as follows:
Fixed-length instructions, i.e., all instructions have the same length of 32
bits (or 64 bits).
A small number of instruction formats.
Fields aligned within the instruction to allow fast instruction decoding.
Both operands held in registers to allow short fetch time.
A large number of General Purpose Registers (GPRs), often 32 or more.
Only one main memory access per instruction.
Only read/write (load/store) instructions access the main memory.
Translation of complex tasks into simple operations by the compiler, which
increases compiler complexity and compiling time.
The compiler for a RISC processor is not developed for a pre-existing
chip; instead, it is developed in conjunction with the chip so that the two
form one unit.
Simpler and faster hardware implementation.
Very suitable for pipelined architecture.
Single-cycle execution.
Design dominated by a hardwired control unit.
Good support for High-Level Languages (HLL).
All operations take place on registers.
In some RISC processors, registers are managed as a variable window,
allowing a 'look' at certain register files instead of using fixed registers
such as AX and BX.
Advantages of RISC Machines
RISC machines have the following advantages over CISC machines:
Smaller instruction set.
Single-cycle execution resulting in faster execution.
Fast instruction decoding because of fixed format.
Easy implementation of instruction pipelining through interleaving of many
instructions.
Memory access done only by load/store instructions; execution of all other
instructions using internal registers only.
Simple design and short design time.
Best target for the state-of-the-art optimizing compilation techniques.
Simplified interrupt service logic.
5.3 CPU: CIRCUIT SIZE AND DIE SIZE
The circuit size or feature size refers to the level of miniaturization of the processor.
To make more powerful processors, more transistors are needed; to pack more
transistors into the same space, they must continually be made smaller.
Die Size
The die or processor die is a rectangular pattern on a wafer that contains the
circuitry to perform a specific function; a single silicon wafer may carry hundreds
of dies. A die, in the context of integrated circuits, is a small block of semiconducting
material on which a given functional circuit is fabricated. Typically, integrated circuits
are produced in large batches on a single wafer of electronic-grade silicon (EGS)
or another semiconductor (such as GaAs) through processes such as
photolithography. The wafer is cut ('diced') into many pieces, each containing one
copy of the circuit. Each of these pieces is called a die.
5.5 SUMMARY
The speed at which the processor executes commands is called the processor
speed or clock speed.
Pipelining is a technique that decomposes any sequential process into smaller
subprocesses.
Computer cooling is required to remove the waste heat produced by
computer components.
The system clock switches between zero and one at a rate of millions of times
per second.
Overclocking a CPU is the process of increasing the clock speed at which
the CPU operates.
RISC is a type of microprocessor that is designed with a limited number of
instructions.
In CISC computers, the main concern was to develop programs using high-
level languages.
Computer Memory
6.0 INTRODUCTION
6.1 OBJECTIVES
The memory hierarchy comprises the total memory system of a computer. The
memory components range from slow, high-capacity auxiliary memory, to a
relatively fast main memory, to cache memory accessible to the high-speed
processing logic. A five-level memory hierarchy is shown in Figure 6.1.
At the top of this hierarchy are the Central Processing Unit (CPU) registers, which
are accessed at full CPU speed; this is memory local to the CPU. Next comes
cache memory, currently on the order of 32 KB to a few megabytes. After that is
the main memory, with sizes currently ranging from 16 MB for an entry-level system
to a few gigabytes at the high end. Next are magnetic disks, and finally magnetic
and optical tapes.
As we move down the hierarchy, the cost per bit of the memory decreases while
its capacity and access time increase.
[Figure 6.1: Five-level memory hierarchy: Registers, Cache, Main Memory,
Magnetic Disk, Magnetic Tape.]
Thus, the overall goal of using a memory hierarchy is to obtain the highest
possible average speed while minimizing the total cost of the entire memory system.
[Figure 6.2: Memory connections in a computer system. The CPU communicates
with cache and main memory; magnetic disk and magnetic tape storage connect
through the I/O processor.]
The following are the characteristics of SRAM:
It is a type of semiconductor memory.
It does not require any external refresh circuitry in order to keep data intact.
SRAM is used for high speed registers, caches and small memory banks
such as router buffers.
It has access time in the range of 10 to 30 nanoseconds and hence allows
for very fast access.
It is very expensive.
Dynamic RAM or DRAM
Dynamic RAM is a type of RAM that holds its data only if it is continuously
refreshed by special logic called a refresh circuit. This circuitry reads the contents
of each memory cell many hundreds of times per second, and due to the way in
which the memory cells are constructed, the reading action itself refreshes the
contents of the memory. If this is not done regularly, the DRAM will lose its contents
even if power continues to be supplied to it. Because of this refreshing action, the
memory is called dynamic. The following are the characteristics of DRAM:
It is the most common type of memory in use. All Personal Computers or
PCs use DRAM for their main memory system instead of SRAM.
DRAM has much higher capacity.
It is much cheaper than SRAM.
It is slower than SRAM because of the overhead of the refresh circuitry.
ROM
In every computer system, there is a portion of memory that is stable and impervious
to power loss. This type of memory is called Read Only Memory or in short
ROM. It is non-volatile memory, i.e., information stored in it is not lost even if the
power supply goes off. It is used for permanent storage of information and it
possesses random access property.
The most common application of ROM is to store the computer's Basic
Input-Output System (BIOS), the code that tells the processor how to access its
resources on powering up the system. Another application is storing the code for
embedded systems.
There are different types of ROMs. They are as follows:
PROM or Programmable Read Only Memory: Unlike a mask ROM, in
which data is written at the time of manufacture, the contents of a PROM
can be programmed once by the user with a special PROM programmer.
PROM provides flexible and economical storage for fixed programs and
data.
EPROM or Erasable Programmable Read Only Memory: This allows
the programmer to erase the contents of the ROM and reprogram it. The
contents of EPROM cells are erased by exposing them to ultraviolet light,
after which they can be rewritten with an EPROM programmer. This type
of ROM provides more flexibility than PROM during the development of
digital systems: since the cells retain the stored information for a long duration,
any change can still be made easily when needed.
EEPROM or Electrically Erasable Programmable Read Only
Memory: In this type of ROM, the contents of the cell can be erased
electrically by applying a high voltage. EEPROM need not be removed
physically for reprogramming.
Organization of RAM and ROM Chips
A RAM chip is best suited for communication with the CPU if it has one or more
control lines to select the chip only when needed. It has a bidirectional data bus
that allows the transfer of data either from the memory to the CPU during a read
operation or from the CPU to the memory during a write operation. This
bidirectional bus can be constructed using a three-state buffer. A three-state buffer
has the following three possible states:
Logic 1
Logic 0
High impedance
The high impedance state behaves like an open circuit which means that the
output does not carry any signal. It leads to very high resistance and hence no
current flows.
[Figure 6.3: Block diagram of a 128 × 8 RAM chip, with chip select inputs CS1
and CS2 (active low), read (RD) and write (WR) controls, a 7-bit address (AD7)
and an 8-bit bidirectional data bus.]
Figure 6.3 shows the block diagram of a RAM chip. The capacity of memory
is 128 words of 8 bits per word. This requires a 7-bit address and an 8-bit
bidirectional data bus.
RD and WR are read and write inputs that specify the memory operations
during read and write, respectively. Two Chip Select (CS) control inputs are for
enabling the particular chip only when it is selected by the microprocessor. The
operation of the RAM chip will be according to the function table as shown in Table
6.1. The unit is in operation only when CS1 = 1 and CS2 = 0. The bar on top of the
second chip select indicates that this input is enabled when it is equal to 0.
Table 6.1 Function Table
Thus, if the chip select inputs are not enabled, or if they are enabled but
read and write lines are not enabled, the memory is inhibited and its data bus is in
high impedance. When chip select inputs are enabled, i.e., CS1 = 1 and CS2 = 0,
the memory can be in the read or write mode. When the write (WR) input is
enabled, a byte is transferred from the data bus to the memory location specified
by the address lines. When the read (RD) input is enabled, a byte from the memory
location specified by the address lines is placed on the data bus.
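The behaviour described by the function table can be sketched as a small model (a hypothetical helper, not real vendor logic): the chip responds only when CS1 = 1 and CS2 = 0; otherwise its data bus floats at high impedance.

```python
def ram_chip_state(cs1, cs2, rd, wr):
    """Illustrative model of the 128 x 8 RAM chip's function table."""
    if not (cs1 == 1 and cs2 == 0):
        return "high impedance"   # chip not selected: data bus floats
    if wr:
        return "write"            # byte moves from the data bus into the addressed cell
    if rd:
        return "read"             # the addressed byte is placed on the data bus
    return "inhibited"            # selected, but neither RD nor WR enabled

print(ram_chip_state(1, 0, rd=1, wr=0))  # read
print(ram_chip_state(0, 0, rd=1, wr=0))  # high impedance
```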
A ROM chip is organized in the same way as a RAM chip. The block
diagram of a ROM chip is shown in Figure 6.4.
The two chip select lines must be CS1 = 1 and CS2 = 0 for the unit to be
operational; otherwise, the data bus is in high impedance. There is no need for
read and write control inputs because the unit can only be read. Thus, when the
chip is selected, the byte selected by the address lines appears on the data bus.
Memory Address Map
A table called a memory address map is a pictorial representation of the assigned
address space for each chip in the system.
The interconnection between the memory and the processor is established
from the knowledge of the size of memory required and the types of RAM and
ROM chips available. RAM and ROM chips are available in a variety of sizes. If
a memory needed for the computer is larger than the capacity of one chip, it is
necessary to combine a number of chips to get the required memory size. If the
required size of the memory is M × N and if the chip capacity is m × n, then the
number of chips required can be calculated as
k = (M × N)/(m × n)
Suppose a computer system needs 512 bytes of RAM and 512 bytes of ROM.
The capacity of the RAM chip is 128 × 8 and that of the ROM is 512 × 8. Hence,
the number of RAM chips required will be:
k = (512 × 8)/(128 × 8) = 4 RAM chips
One ROM chip will be required by the computer system. The memory
address map for the system is illustrated in Table 6.2, which consists of three
columns. The first column specifies whether a RAM or a ROM chip is used. The
next column specifies a range of hexadecimal addresses for each chip. The third
column lists the address bus lines.
Table 6.2 Memory Address Map
Table 6.2 shows only 10 lines for the address bus although the address bus
consists of 16 lines; the remaining six lines are assumed to be zero. The RAM chip has
128 bytes and needs seven address lines, which are common to all four RAM
chips. The ROM chip has 512 bytes and needs nine address lines. Thus, ×'s are
assigned to the low-order bus lines: lines 1 to 7 for the RAM chips and lines 1
through 9 for the ROM chip, where the ×'s represent a binary number taking all
possible combinations of 0 and 1 values. There must also be a way to distinguish
between the four RAM chips; lines 8 and 9 are used for this purpose.
If lines 9 and 8 are 00, it is a RAM1 chip; if it is 01, it is a RAM2 chip; if it is 10,
it is RAM3 chip and if the value of lines 9 and 8 is 11, it is RAM4 chip. Also, the
distinction between RAM and ROM is required. This distinction is done with the
help of line 10. When line 10 is 0, the CPU selects one of the RAM chips, and
when line 10 is 1, the CPU selects the ROM chip.
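The chip-count formula and the address-map decoding described above can be sketched as follows (a minimal illustration; the function names are mine, and the line numbering follows the text: lines 1 to 7 address a byte within a RAM chip, lines 8 and 9 select among the four RAM chips, and line 10 distinguishes RAM from ROM):

```python
def chips_required(M, N, m, n):
    """Number of m x n chips needed to build an M x N memory: k = (M*N)/(m*n)."""
    return (M * N) // (m * n)

print(chips_required(512, 8, 128, 8))  # 4 RAM chips, as in the text

def decode(address):
    """Decode a 10-bit address into (chip, offset) per the memory address map.
    Bit 0 corresponds to address line 1, bit 9 to address line 10."""
    line10 = (address >> 9) & 1          # 0 -> RAM, 1 -> ROM
    if line10 == 0:
        ram_number = (address >> 7) & 0b11   # lines 9 and 8 pick RAM1..RAM4
        offset = address & 0x7F              # lines 1-7 address within the chip
        return (f"RAM{ram_number + 1}", offset)
    return ("ROM", address & 0x1FF)          # lines 1-9 address within the ROM

print(decode(0b0010000101))  # lines 9,8 = 01 -> RAM2, offset 5
print(decode(0b1000000101))  # line 10 = 1  -> ROM, offset 5
```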
Memory Connection to CPU
The CPU connects the RAM and ROM chips through the data and address buses.
Low-order lines within the address bus select the byte within the chip and other
lines select a particular chip through its chip select lines. The memory chip
connection to the CPU is shown in Figure 6.5. Seven low-order bits of the address
bus are connected directly to each RAM chip to select one of 128 possible bytes.
[Figure 6.5: Memory connection to the CPU. Address lines 8 and 9 drive a 2 × 4
decoder that selects one of the four 128 × 8 RAM chips; address line 10 selects
between RAM and ROM; the RD and WR lines, the low-order address lines and
the 8-bit data bus are common to all chips, and the 512 × 8 ROM takes address
lines 1 through 9.]
6.5 SUMMARY
Memory is the faculty of the brain by which information is encoded, stored,
and retrieved when needed.
RAM is the main memory of a computer system. Its purpose is to store
data and applications that are currently in use by the processor.
SRAM is a type of RAM that holds its data without external refresh as long
as power is supplied to the circuit.
Dynamic RAM holds its data only if it is continuously refreshed by special
logic called a refresh circuit.
ROM (Read Only Memory) can only be read, not written. This type of
memory is non-volatile; the information is stored permanently in such
memories during manufacture.
A table called a memory address map is a pictorial representation of the
assigned address space for each chip in the system.
UNIT 7 BUS
Structure
7.0 Introduction
7.1 Objectives
7.2 Bus Interface and Expansion Slots
7.2.1 Industry Standard Architecture
7.2.2 Extended Industry Standard Architecture
7.2.3 Micro Channel Architecture
7.2.4 Video Electronics Standards Association
7.2.5 Peripheral Component Interconnect or Personal Computer Bus
7.2.6 Accelerated Graphics Port
7.3 FSB
7.4 USB
7.5 Dual Independent Bus
7.6 Answers to Check Your Progress Questions
7.7 Summary
7.8 Key Words
7.9 Self Assessment Questions and Exercises
7.10 Further Readings
7.0 INTRODUCTION
In this unit, you will learn about expansion slots, USB and dual independent bus.
An expansion slot is a socket on the motherboard that is used to insert an expansion
card (or circuit board), which provides additional features to a computer such as
video, sound, advanced graphics, Ethernet or memory. The expansion card has
an edge connector that fits precisely into the expansion slot as well as a row of
contacts that is designed to establish an electrical connection between the
motherboard and the electronics on the card, which are mostly integrated circuits.
Depending on the form factor of the case and motherboard, a computer system
generally can have anywhere from one to seven expansion slots. There are several
types of expansion slots, including AGP (Accelerated Graphics Port), PCIe (also
known as PCI express), PCI (Peripheral Component Interconnect) and ISA
(Industry Standard Architecture).
7.1 OBJECTIVES
In Figure 7.1, you can see the PCI expansion slots, indicated by numbers, along
with the AGP slot. Developed to support 3D graphics applications, AGP has
a 32-bit wide channel that runs at 66 MHz quad-pumped, which translates
into a total bandwidth of 1.06 GB/sec, about eight times the bandwidth of the
PCI bus (133 MB/sec). The motherboard has various types of expansion slots,
including Peripheral Component Interconnect (PCI) and Accelerated Graphics
Port (AGP), as shown in Figure 7.1.
AGP also accesses the main memory directly, allowing 3D textures to be stored in
main memory as well as in video memory. Each node has an AGP Pro 50 slot that
enables users to install both AGP and AGP Pro cards in the system. In the SGI
graphics cluster nodes, the slot is occupied by the graphics card. The motherboard
has five PCI slots that support 32-bit, 33-MHz PCI devices. Expansion cards in
the SGI graphics cluster reside in specific slots, as outlined in Table 7.2, which is
based on Figure 7.2.
Table 7.2 Expansion Slots and Cards in the SGI Graphics Cluster
Slot: Card
AGP: Graphics card.
PCI slot 1 (closest to the AGP slot): Optional gigabit Ethernet card.
PCI slot 2: SGI ImageSync card (SGI graphics cluster series 12 only).
PCI slot 3: Network Interface Card (NIC); secondary Ethernet, master node
only, included with SGI graphics cluster series 12.
PCI slot 4: Empty slot, available for customer option.
PCI slot 5 (closest to the chassis wall): Commercial audio card, included with
the master node of SGI graphics cluster series 12.
7.2.1 Industry Standard Architecture
Expansion slots are generally sited at the back of the motherboard, aligned
with the back-plate of the case. They accept several kinds of cards, such as sound
cards, network cards, wireless cards and FireWire cards; each type of card has a
different function in its allotted expansion slot. The standard allows add-on
cards to extend the capabilities of a PC. The original bus was 8-bit, but it was
replaced by a 16-bit architecture in the mid 1980s; the Industry Standard
Architecture (ISA) bus survived until it was finally displaced by the PCI bus. When it
appeared on the first PC, the 8-bit ISA bus ran at a modest 4.77 MHz, the
same speed as the processor. It was improved over the years, eventually
becoming the 16-bit ISA bus with the advent of the IBM PC/AT, which used the Intel
80286 processor and a 16-bit data bus. At this stage it kept up with the speed of
the system bus, first at 6 MHz and later at 8 MHz. The ISA
bus specifies a 16-bit connection driven by an 8 MHz clock, which seems primitive
compared with the speed of the latest processors. It has a theoretical data transfer
rate of up to 16 MBps. In practice this rate is halved to 8 MBps, since one bus
cycle is required for addressing and a further bus cycle for the 16 bits of data. The
ISA bus thus provides a 16-bit data bus. In Figure 7.2, the 16-bit ISA connector
is formed by adding a short additional slot in line with the 8-bit slot. ISA also
added eight additional IRQs and doubled the number of DMA channels. ISA
expansion cards are assigned their IRQ and DMA numbers through jumpers
and DIP switches. The ISA architecture also separated the bus clock from the
CPU clock to allow the slower data bus to operate at its own speed. ISA slots are
found on 286, 386, 486 and some Pentium PCs. Figure 7.2 shows the ISA 16-bit
card and slot.
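The ISA throughput figures quoted above follow directly from the bus width and clock: a 16-bit bus at 8 MHz moves 2 bytes per cycle, and since each transfer needs one cycle for the address and one for the data, the effective rate halves. A quick calculation (the helper function is mine, for illustration):

```python
def bus_bandwidth_mb(width_bits, clock_mhz, cycles_per_transfer=1):
    """Peak bandwidth in MB/s for a parallel bus."""
    bytes_per_transfer = width_bits // 8
    return bytes_per_transfer * clock_mhz / cycles_per_transfer

print(bus_bandwidth_mb(16, 8))     # 16 MB/s: theoretical ISA rate
print(bus_bandwidth_mb(16, 8, 2))  # 8 MB/s: one address cycle plus one data cycle
print(bus_bandwidth_mb(32, 33))    # ~133 MB/s: PCI, for comparison
```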
Figure 7.4 illustrates that PCI cards use 47 pins to connect (49 pins for a bus-
mastering card, which can control the PCI bus without CPU intervention). The
PCI bus is able to work with so few pins because of hardware multiplexing,
which means that the device sends more than one signal over a single pin. Also,
PCI supports devices that use either 5 volts or 3.3 volts. Intel proposed the PCI
standard in 1991, but it did not achieve popularity until the arrival of Windows 95,
which supported a feature known as PnP (Plug and Play). The PCI Express
(PCIe) bus standard was published in 2003 and was initially adopted for video
cards. PCI Express is a more recent technology that is slowly replacing AGP. PCI
Express x16 slots can transfer data at 4 GB per second, about double the rate of
an AGP 8x slot. PCI Express slots come in x1, x2, x4, x8 and x16 sizes; x16
slots are used for video cards. This is the latest
specification for graphics card technologies. PCIe is an implementation of the PCI
computer bus that keeps existing PCI programming concepts but bases them on a
completely different and much faster serial physical-layer communications protocol.
Multiple serial lanes can run in parallel, giving large data throughput; it is this lane
count that makes the PCIe socket grow between the x1, x4, x8 and x16 versions
of the interface. The PCIe link is built around a bidirectional, serial (1-bit), point-
to-point connection known as a lane. This is in sharp contrast to PCI, a bus-based
system in which all the devices share the same 32-bit parallel bus. A connection
between any two PCIe devices is known as a link and is built up from a collection
of one or more lanes. All devices must minimally support single-lane (x1) links.
Devices may optionally support wider links composed of 2, 4, 8, 12, 16 or 32
lanes.
PCIe is useful in applications other than graphics cards and has hence become
a new backplane standard in personal computers. PCI-X, by contrast, is an
enhanced PCI bus technology originally developed by IBM, HP and Compaq
that is backward compatible with existing PCI cards. PCI and 32-bit PCI-X
slots are physically the same, and PCI cards can plug into PCI-X slots. PCI-X
cards will run in PCI slots, but at the slower PCI rates. The 64-bit PCI-X slots
are longer.
PCI-X Slots: The two long green slots on the Gigabyte motherboard are 64-
bit PCI-X slots, which accept all 32-bit and 64-bit PCI-X and PCI cards.
Table 7.3 shows the types of PCI buses and their speed.
Table 7.3 PCI Bus
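The PCIe figures above scale with lane count. A rough per-direction calculation for first-generation PCIe (assuming 2.5 GT/s signalling with 8b/10b encoding, i.e., about 250 MB/s usable per lane; the helper is mine, for illustration):

```python
def pcie_gen1_bandwidth_mb(lanes):
    """Approximate PCIe 1.x bandwidth per direction.
    2.5 GT/s per lane with 8b/10b encoding gives about 250 MB/s usable per lane."""
    return 250 * lanes

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {pcie_gen1_bandwidth_mb(lanes)} MB/s")
# x16 gives 4000 MB/s, matching the 4 GB/s figure quoted for PCIe x16
```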
PCI can connect more devices, for example up to five external components. Each
of the five connectors for an external component can be replaced with two fixed
devices on the motherboard. The PCI bridge chip regulates the speed of the PCI
bus independently of the CPU speed. This provides a higher degree of reliability
and ensures that PCI hardware manufacturers know exactly what to design for.
7.2.6 Accelerated Graphics Port
The Accelerated Graphics Port (AGP) expansion slot connects AGP video cards
to a motherboard; an example of an AGP card is the AGP GeForce FX 5500.
Video expansion cards are also known as graphics cards. AGP video cards are
capable of a higher data transfer rate than PCI video cards. A video card simply
plugs into an AGP slot and connects the monitor or other video display device to
a computer. The Digital Video Interface (DVI) out connector is also used to connect
digital video displays. Video cards with a TV output connection are capable of
displaying a computer's video on a television, and video cards with a TV input
connection are able to display a television's video on a computer. The AGP card
and the monitor together determine the quality of a computer's video display.
AGP also supports faster modes, with throughputs of 533 MBps and 1.07 GBps
respectively, and allows 3D textures to be stored in main memory rather than
video memory. Each computer with AGP support will have either one AGP slot
or onboard AGP video. A user who wishes to have multiple video cards in the
computer would have one AGP video card and one or more PCI video cards.
Figure 7.5 shows an AGP slot; AGP slots and cards come in different modes.
You must be careful to match the card and slot with the correct mode, otherwise
it will not work; some cards and slots are capable of running in more than one
mode. AGP 1x mode is the oldest and transfers data at 266 MBps. AGP 2x mode
transfers data at 533 MBps, AGP 4x at 1.07 GBps, and the latest mode, AGP
8x, at 2.14 GBps. GPU stands for Graphics Processing Unit. The video card is in
charge of controlling the video display, and much like the CPU's relationship with
the motherboard, the brain of the video card is the GPU. It is responsible for
processing the video card's graphical input and output data.
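The AGP mode figures quoted above all derive from the base rate: a 32-bit channel at a 66 MHz base clock gives roughly 266 MB/s at 1x, and each mode multiplies the transfers per clock (an illustrative calculation; the helper function is mine):

```python
def agp_bandwidth_mb(mode):
    """Approximate AGP throughput: 32-bit bus at a ~66.6 MHz base clock,
    with 'mode' transfers per clock (1x, 2x, 4x, 8x)."""
    base = 32 // 8 * 66.6   # 4 bytes per transfer * 66.6 MHz ~= 266 MB/s at 1x
    return base * mode

for mode in (1, 2, 4, 8):
    print(f"AGP {mode}x: ~{agp_bandwidth_mb(mode):.0f} MB/s")
# 1x ~266, 2x ~533, 4x ~1066 (1.07 GB/s), 8x ~2131 (2.14 GB/s)
```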
7.3 FSB
Front-side bus (FSB) is a bus interface which was often used in Intel-chip-based
computers during the 1990s and 2000s. The competing EV6 bus served the same
function for AMD CPUs. Both typically carry data between the central processing
unit (CPU) and a memory controller hub, known as the Northbridge. Depending
on the implementation, some computers may also have a back-side bus that
connects the CPU to the cache. This bus and the cache connected to it are faster
than accessing the system memory (or RAM) via the front-side bus. The speed of
the front side bus is often used as an important measure of the performance of a
computer.
7.4 USB
Universal Serial Bus (USB) is a high-speed serial bus; its data transfer rate is
higher than that of a serial port. It supports devices such as monitors, keyboards,
mice, speakers, microphones, scanners, printers and modems, and allows several
devices to be interfaced to a single port in a daisy chain. USB provides power
lines along with data lines: a USB cable contains four wires in all, two of which
supply electrical power to peripherals, eliminating the need for a separate bulky
power supply, while the other two carry data and commands. USB uses three
types of data transfer: isochronous (real-time), interrupt-driven and bulk data
transfer. USB is a connectivity standard developed by Intel and others. Peripherals
connect easily to the system unit, and a device is configured automatically when it
is plugged into a socket. USB is considered the most successful interconnection
technology for the system unit; it has migrated readily to mobile gadgets and other
consumer electronics. It avoids special types of interface cards and moves easily
to the laptop. The first USB specification was released in the mid-1990s; USB 2.0
operates at speeds of up to 480 Mbps. Various types of portable devices, such as
handhelds, digital cameras and mobile phones, can be connected to the system
unit; for example, images, music files and other multimedia files can be transferred
from a digital camera to a printer with the help of USB or Wireless USB, which
extends this connectivity to the mobile lifestyle, including telephony and
videoconferencing. In the USB cable's four-wire interface, two of the wires form
a differential pair whose function is to transmit and receive data, while the remaining
two wires supply power; the source of power is the host, and hubs may be self-
powered. Two different connectors are used on a USB cable: one end connector
is attached for upstream communication, whereas the other end connector is used
for downstream communication. USB cables are available in lengths up to 5 metres.
Types of Communication Transfer Modes in USB
Some of the communication transfer modes available for USB are as follows:
Control Mode: Used by the host to send commands and transfer small
amounts of data in both directions.
Interrupt Mode: The host polls (queries) devices periodically to transfer
small amounts of data, for example from keyboards and mice.
Bulk Mode: Used where data accuracy matters more than timing, for
example disk drive storage.
Isochronous Mode: Guarantees the timing of data delivery, for example
for USB audio speakers.
7.5 DUAL INDEPENDENT BUS
As CPU design evolved towards faster local buses and slower peripheral buses,
Intel adopted the dual independent bus (DIB) terminology: an external front-side
bus to the main system memory, and an internal back-side bus between one or
more CPUs and the CPU caches. It was introduced in the Pentium Pro and
Pentium II products in the mid to late 1990s. The primary bus, used for
communicating data between the CPU, main memory and input and output devices,
is called the front-side bus; the back-side bus accesses the level 2 cache.
7.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. Expansion slots are located on the motherboard at the back of the computer;
the ports on the cards are arranged for slot distribution. There are several
types of expansion slots, including AGP, PCIe (also known as PCI Express),
PCI and ISA.
2. Universal Serial Bus (USB) is a high-speed serial bus. Its data transfer rate
is higher than that of a serial port. It supports devices such as monitors,
keyboards, mice, speakers, microphones, scanners, printers and modems.
3. The Accelerated Graphics Port (AGP) expansion slot connects AGP video
cards to a motherboard.
7.7 SUMMARY
Expansion slots are located on the motherboard, with the ports on the
cards they hold exposed at the back of the computer. There are several
types of expansion slots, including AGP, PCIe also known as PCI Express,
PCI and ISA.
ISA expansion cards are designated to the appropriate IRQ or DMA
numbers through jumpers and DIP switches. The ISA architecture also
separated the bus clock from the CPU clock to allow the slower data bus
to operate at its own speeds.
Extended Industry Standard Architecture (EISA) is similar to the Micro-Channel
Architecture (MCA) bus both in terms of technology and marketing strategy.
The MCA bus is designed to work with peripheral boards that transfer
data in 8-bit, 16-bit or 32-bit words.
VESA refers to a standards group that was formed in the late 1980s to
address video-related issues in personal computers.
Peripheral Component Interconnect (PCI) standard specifies a computer
bus for attaching peripheral devices to a computer motherboard.
The Accelerated Graphics Port (AGP) expansion slot connects AGP video
cards to a motherboard.
Universal Serial Bus (USB) is a high-speed serial bus. Its data
transfer rate is higher than that of a serial port. It supports interfaces such
as monitors, keyboards, mice, speakers, microphones, scanners, printers and
modems.
7.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES
Storage Devices
8.0 INTRODUCTION
In this unit, you will learn about the various types of storage devices. Storage
devices which help in backup storage are called auxiliary memory. RAM is a
volatile memory and, thus, a permanent storage medium is required in a computer
system. Auxiliary memory devices are used in a computer system for permanent
storage of information and hence these are the devices that provide backup
storage.
8.1 OBJECTIVES
Storage devices which help in backup storage are called auxiliary memory. RAM
is a volatile memory and, thus, a permanent storage medium is required in a
computer system. Auxiliary memory devices are used in a computer system for
permanent storage of information and hence these are the devices that provide
backup storage. They are used for storing system programs, large data files and
other backup information. The auxiliary memory has a large storage capacity
and is relatively inexpensive, but has low access speed as compared to the main
memory. The most common auxiliary memory devices used in computer systems
are magnetic disks and tapes. Now, optical disks are also used as auxiliary
memory.
Magnetic Disk
Magnetic disks are circular metal plates coated with a magnetized material on
both sides. Several disks are stacked on a spindle one below the other with read/
write heads to make a disk pack. The disk drive consists of a motor and all the
disks rotate together at very high speed. Information is stored on the surface of a
disk along a concentric set of rings called tracks. These tracks are divided into
sections called sectors. A cylinder is the set of corresponding tracks on every surface
of a disk pack. Disk drives are connected to a disk controller.
Thus, if a disk pack has n plates, there will be 2n surfaces; hence, the
number of tracks per cylinder is 2n. The minimum quantity of information which
can be stored, is a sector. If the number of bytes to be stored in a sector is less
than the capacity of the sector, the rest of the sector is padded. Figure 8.1 shows
a magnetic disk memory.
[Figure: a disk pack showing the spindle, surfaces 1 to 2n, the read/write heads and a cylinder]
Fig. 8.1 Magnetic Disk
[Figure: a disk surface showing tracks, sectors and the read/write head]
Let s bytes be stored per sector; p sectors exist per track, t tracks per surface
and m surfaces. Then, the capacity of the disk will be defined as:
Capacity = m × t × p × s bytes
If d is the diameter of the disk, the density of recording would be:
Density = (p × s)/(π × d) bytes/inch
A set of disk drives is linked to a disk controller; the controller accepts the
instructions and keeps the read/write heads ready for reading or writing. When
the read/write instruction is accepted by the disk controller, the controller first
positions the arm so that the read/write head reaches the appropriate cylinder. The
seek time (Ts) is the time needed to reach the proper cylinder. The
maximum Ts is the time needed by the head to reach the innermost cylinder from
the outermost cylinder or vice versa. The minimum Ts will be 0 if the head is
already positioned on the appropriate cylinder. Once the head is positioned on the
cylinder, there is further delay because the read/write head has to be positioned
on the appropriate sector. This is rotational delay, also known as latency time
(Tl). The average rotational delay equals half the time taken by the disk to complete
one rotation.
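The capacity and latency formulas above can be sketched in Python. The drive parameters below are illustrative assumptions, not values taken from the text:

```python
# Disk capacity (Capacity = m * t * p * s) and average rotational latency
# (half the time of one full rotation), per the formulas above.

m = 6        # surfaces in the disk pack (assumed example value)
t = 500      # tracks per surface
p = 32       # sectors per track
s = 512      # bytes per sector
rpm = 6000   # rotational speed in revolutions per minute

capacity_bytes = m * t * p * s
rotation_time_ms = 60_000 / rpm       # one rotation, in milliseconds
avg_latency_ms = rotation_time_ms / 2

print(capacity_bytes)   # 49152000 bytes (about 47 MB)
print(avg_latency_ms)   # 5.0 ms
```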
Floppy Disk
A floppy disk, also known as a diskette, is a very convenient bulk storage device
and can be taken out of the computer. It is of 5.25" or 3.5" size, the latter size
being more common. It is contained in a rigid plastic case. The read/write heads
of the disk drive can write or read information from both sides of the disk. The
storage of data is in magnetic form, similar to that of the hard disk. The 3.5" floppy
disk has a capacity of storage up to 1.44 Mbytes. It has a hole in the centre for
mounting it on the drive. Data on the floppy disk is organized during the formatting
process. The disk is also organized into sectors and tracks. The 3.5" high density
disk has 80 concentric circles called tracks and each track is divided into 18
sectors. Tracks and sectors exist on both sides of the disk. Each sector can hold
512 bytes of data plus other information like address, etc. It is a cheap read/write
bulk storage device.
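The 1.44 Mbytes figure quoted above follows directly from this geometry; a quick check in Python (the mixed-unit division at the end is the convention behind the familiar marketing figure):

```python
# Capacity of a 3.5" high-density floppy from the geometry described above.
sides = 2               # tracks and sectors exist on both sides
tracks = 80             # concentric tracks per side
sectors = 18            # sectors per track
bytes_per_sector = 512

capacity = sides * tracks * sectors * bytes_per_sector
print(capacity)                           # 1474560 bytes
# The "1.44 MB" label divides by 1000 and then by 1024 (a mixed-unit convention).
print(round(capacity / 1000 / 1024, 2))   # 1.44
```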
Compact Disk
CD is an optical storage medium that reads the recorded data by means of an
optical (laser) beam. The process of storing audio data in digital format is known
as audio encoding; one second of audio information can take about one million
bits of data. A CD is capable of storing such data in an area as tiny as a pinhead.
A CD is 4.75" in diameter and is made of a polycarbonate plastic disc. It can
contain approximately 74 minutes of audio information. The information is encoded
on the CD in terms of lands and pits, represented as binary highs and lows, which
are 'read' by a laser. The CD became a common medium of exchanging data for
audio, video and other applications. Figure 8.4 shows the structure of a compact
disk, which is generally connected with a circuit board.
[Fig. 8.4: Structure of a compact disk connected to a circuit board]
Compact Disk Interactive
Compact Disk Interactive (CD-I) is the name of the interactive multimedia CD
player developed by Royal Philips Electronics N.V. It is mainly useful for
movies and game videos. A CD-I movie disk, also known as a video CD, holds
approximately 70 minutes of Video Home System (VHS) quality video. In 1990,
Sony launched a portable CD-I device with a 4" LCD monitor. Figure
8.5 shows a compact disk interactive player, which is designed for real-time animation,
video and sound.
CD-ROM
This is an optical medium of data storage. The current maximum capacity of a
CD-ROM is 900MB, with a maximum read/write access speed of 52X, which
means 10,350 Rotations Per Minute (RPM) and a transfer rate of 7.62 Mega-
Bytes Per Second (MBPS). The data is written with the help of an infrared
laser beam through an optical lens, and the same laser at lower intensity is used to
read data from the CD-ROM.
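The 7.62 MBPS figure follows from the CD-ROM speed convention, where 1X equals the original audio transfer rate of 150 KB/s (the base rate is a standard convention, not stated in the text):

```python
# A drive's X rating is a multiple of the 1X base rate of 150 KB/s.
base_rate_kb_per_s = 150
speed_rating = 52                          # a 52X drive

transfer_kb_per_s = speed_rating * base_rate_kb_per_s
print(transfer_kb_per_s)                   # 7800 KB/s
print(round(transfer_kb_per_s / 1024, 2))  # 7.62 MB/s, the figure above
```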
CD-Recordable
Write Once Read Many (WORM) storage has been around since the
1980s and is a type of optical drive that can be written to and
read from. When data is written to a WORM drive, physical marks are made
on the media surface by a low-powered laser; these marks are permanent and
cannot be erased, hence 'write once'. Compact Disk-Recordable (CD-R)
media manufacturers use tests and mathematical modelling techniques to estimate
media longevity. The colour of a CD-R disc is related to the colour of the specific
dye that was used in the recording layer. This base dye colour is modified by
the reflective coating, which is either gold or silver.
Figure 8.6 shows the protective and recording layers in a CD-R. The pre-groove
section lies between the recording layer and the polycarbonate disc substrate. The CD-R has a
spiral track, which is preformed during manufacture, onto which data is written
during the recording process. This ensures that the recorder follows the same
spiral pattern as a conventional CD. The track also has the same width of 0.6 microns and
pitch of 1.6 microns as a conventional disc. Discs are written from the inside of the
disc outward. The spiral track makes 22,188 revolutions around the CD, with
roughly 600 track revolutions per millimetre. CD-R writes data to a disc by using
its laser to physically burn pits into the organic dye. The CD-R is not strictly
WORM. By mid-1998, drives were capable of writing at quad-speed and reading
at twelve-speed, which is denoted as 4X/12X and were bundled with much
improved CD mastering software. The CD-R format has not been free of
compatibility issues. The surface of a CD-R is made to exactly match the 780nm
laser of an ordinary CD-ROM drive. However, CD-R’s real disadvantage is that
the writing process is permanent: the media cannot be erased and written
to again.
CD-Rewritable
A CD-Rewritable (CD-RW) disk looks like a CD-ROM but is distinguishable
from CD-R discs by its metallic grey colour. It acts as a CD-ROM at the time of
reading data, and it allows data to be recorded thousands of times. Recording on a
CD-RW is accomplished by locally melting the recording material, which is then
cooled quickly enough to quench it in its amorphous phase. The cooling rate
is a strong function of the thermal properties of the recording layer and the
surrounding layers in particular; the thermal diffusivity is calculated by Equation 8.1:
Thermal diffusivity, α = k/(ρ × c_p) (8.1)
Where,
k = The thermal conductivity (W/(m·K)).
ρ = The density (kg/m³).
c_p = The specific heat capacity (J/(kg·K)).
ρ × c_p = The volumetric heat capacity (J/(m³·K)).
In heat transfer analysis, thermal diffusivity is usually denoted by α. It is
the thermal conductivity divided by the density and specific heat capacity at constant
pressure, and has the SI unit of m²/s.
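Equation 8.1 can be evaluated directly; the material constants below are illustrative assumptions only, not properties of any specific recording alloy:

```python
# Thermal diffusivity from Equation 8.1: alpha = k / (rho * c_p).

def thermal_diffusivity(k, rho, c_p):
    """k: thermal conductivity in W/(m*K); rho: density in kg/m^3;
    c_p: specific heat capacity in J/(kg*K). Returns alpha in m^2/s."""
    return k / (rho * c_p)

# Hypothetical phase-change layer values (assumed for illustration).
alpha = thermal_diffusivity(k=0.5, rho=6000.0, c_p=210.0)
print(alpha)  # about 3.97e-07 m^2/s
```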
A process of annealing is used, in which the material is heated to slightly below its
melting temperature. Fast data erasure can be achieved if the annealing rate is
very high while the temperature stays slightly below the melting temperature. The
basic requirements for an effective erasable phase-change material are as follows:
Different refractive indices for the crystalline and amorphous phases, to
maintain optical contrast,
A low melting point (low laser power) and moderate thermal conductivity,
for fast cooling and quenching, and
Rapid annealing below the melting temperature, for a single-pass erasure
process.
The structure of the CD-RW disk is similar to that of the CD-R. It has a similar
polycarbonate substrate layer, protective layer and reflective metal layer. It has two dielectric
layers and a layer of phase-changing metal alloy. The dielectric layers prevent
overheating of the phase-changing layer during the data recording process. The data
marks, called pits, are formed inside the light-absorbing phase-changing film and
have different optical properties and different light reflectance. A typical structure
of the CD-RW disk and the data reading process is shown in the Figure 8.7.
A blank CD-RW has a pre-formed groove that simplifies the head positioning
mechanism: the laser beam of the servo mechanism can follow this groove during
both data reading and writing. The CD-RW drive differs from a regular CD-ROM
drive in that its laser can operate at different power levels. The highest level causes
phase transitions in the recording material and is used for data recording. The
medium level is used for annealing, or erasing. And the lowest level of laser power
is used for data reading; it scans the pits and lands without damaging the disk
surface. CD-RW uses the Direct Over-Write (DOW) method, whereby new data
are simply written on top of the old data. This design makes CD-RW an
inexpensive and mobile rewritable storage medium. On the other hand, the
long-term future of CD-RW technology is uncertain, since the newer DVD-RAM
technology is gaining momentum in the market.
Concepts of Latest Storage Devices
Various storage devices, such as input storage devices and output storage devices
are used in computer peripherals. The input storage devices allow information on
a computer to be retrieved anytime. Depending on the computer manufacturer,
different internal storage devices are made with computers. Magnetic disks use a
read/write head that gives direct access to storage, so information can be
skipped to reach the correct data. Redundant Array of Independent/Inexpensive
Disks (RAID) uses a striping method in which data is stored across individual
physical disks, and lost information can be retrieved from the individual disks.
Magneto-optical disks use a laser beam to record information. Magnetic tape can
be used on a computer internally or externally. Information on a magnetic tape is
saved sequentially, so time is lost while accessing certain files or records. External
output storage devices, i.e., external hardware devices, are used to save information
from a computer. Optical disks use laser beams to record information on a CD or
DVD. For example, Iomega zip drives compress data onto a disk. Virtual tape
stores information on a tape cartridge. PCMCIA (Personal Computer Memory
Card International Association) cards are used in digital cameras or cellular phones.
These cards can also be used to save or upload files to a computer. Songs and
music files can also be stored on an iPod or MP3 music device. The latest storage
devices, such as DVD-RW, zip disk, Blu-ray disk, HVD, USB, external HDD,
pen drives and memory sticks, iPod, MPEG audio layer III, Set-Top-Box, etc.,
are frequently used in the networking era, as follows:
Digital Versatile Disk-Rewritable
Digital Versatile Disk-Rewritable (DVD-RW) is like a DVD-R but can be erased
and written to again. It can be erased so that new data can be added. DVD-RWs
can hold 4.7GB of data and do not come in double-layered or double-sided
versions like DVD-R does. Because of their large capacity and ability to be used
multiple times, DVD-RW discs are a great solution for frequent backups. To record
data onto a DVD-RW disc, you will need a DVD burner that supports the DVD-
RW format. DVD-RW disc brings increased functionality to the DVD-R format.
These discs are rewritable up to 1,000 times. With this built-in versatility, you can
store a combination of both digital video and digital audio files on the same disc.
Some of the examples of rewritable media are 17344 DVD-RW 4×1pk w/
Standard Jewel Cases, 17345 DVD-RW 4x5pk w/Standard Jewel Cases and
17346 DVD-RW 4x25pk Spindle. The features of DVD-RW are as follows:
It has 4.7GB capacity that is equal to 2 hours of video.
It has high-storage density, which is compatible with existing DVD video
players and DVD-ROM drives.
It holds seven times more data than a full size CD-R.
It has outstanding picture quality and long archival life.
It is capable of 2x and 4x recording speeds.
It transfers data easily and is useful for video recording or authoring.
Figure 8.8 shows the DVD-RW, in which data can be erased and recorded over
numerous times without damaging the medium.
DVD-RW and DVD+RW formats are known as re-writable formats. The sister
format of DVD-RW is known as DVD-R, which is essentially a record-once
version of DVD-RW. DVD+RW’s sister format is called DVD+R. DVD-RW
discs can be read with virtually any PC DVD-ROM drive and with most of the
regular, stand-alone DVD players.
ZIP Drives
These are similar to disk drives but with thicker magnetic disks and a larger number
of heads in the drive to read/write. The Zip drive was introduced mainly to overcome
the limitations of the floppy drive and replace it with a higher capacity and faster
medium. They are better than floppy disks but still slow in performance and with a
high cost-to-storage ratio. The disk capacity ranges from 100MB to 750MB. Zip
drives were popular for several years until the introduction of CD-ROMs and
CD-Writers, which have now come to be widely accepted due to their cost,
convenience and speed.
Blu-Ray Disc
Blu-ray is another high-density optical storage media format that is gaining
popularity these days. It is mainly used for high-definition video and storing data.
The storage capacity of a dual-layer Blu-ray disc is 50 GB, almost equal to six
double-layer DVDs or more than ten single-layer DVDs. With this high storage
capacity, Blu-ray discs can hold and play back large quantities of high-definition
video and data.
Unlike DVDs and CDs, which started with read-only formats and only later
added recordable and re-writable formats, Blu-ray was initially designed in several
different formats:
BD-ROM (Read-Only): For pre-recorded content.
BD-R (Recordable): For PC data storage.
BD-RW (Rewritable): For PC data storage.
BD-RE (Rewritable): For HDTV recording.
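The capacity comparison above checks out arithmetically; the DVD capacities used here are the common nominal figures (assumptions, not stated in the text):

```python
# Comparing the 50 GB dual-layer Blu-ray capacity with nominal DVD capacities.
bluray_gb = 50.0
dvd_single_layer_gb = 4.7    # nominal single-layer DVD capacity (assumed)
dvd_double_layer_gb = 8.5    # nominal double-layer DVD capacity (assumed)

print(round(bluray_gb / dvd_double_layer_gb, 1))  # 5.9 -> about six double-layer DVDs
print(round(bluray_gb / dvd_single_layer_gb, 1))  # 10.6 -> more than ten single-layer DVDs
```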
A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an
electromechanical data storage device that uses magnetic storage to store and
retrieve digital information using one or more rigid rapidly rotating disks (platters)
coated with magnetic material. The platters are paired with magnetic heads, usually
arranged on a moving actuator arm, which read and write data to the platter surfaces.
Data is accessed in a random-access manner, meaning that individual blocks of data
can be stored or retrieved in any order and not only sequentially. Hard disks are
rigid platters, composed of a substrate and a magnetic medium. The substrate – the
platter's base material – must be non-magnetic and capable of being machined to a
smooth finish. It is made either of aluminium alloy or a mixture of glass and ceramic.
1. The Zip drive was introduced mainly to overcome the limitations of the floppy
drive and replace it with a higher capacity and faster medium. A Zip drive
has thicker magnetic disks and a larger number of heads in the drive to read/
write.
2. Magnetic disks are circular metal plates coated with a magnetized material
on both sides. Several disks are stacked on a spindle one below the other
with read/ write heads to make a disk pack.
3. Compact Disk Interactive (CD-I) is the name of the interactive multimedia
CD player developed by Royal Philips Electronics N.V. It is mainly useful
for creating movies, game videos.
8.7 SUMMARY
8.8 KEY WORDS
Hard Disk: A hard disk drive (HDD), hard disk, hard drive, or fixed disk is
an electromechanical data storage device that uses magnetic storage to store
and retrieve digital information using one or more rigid rapidly rotating disks
(platters) coated with magnetic material.
Optical Disk: Optical disk storage technology has the benefit of high-volume
economical storage coupled with slower access times than magnetic disk storage.
BLOCK - III
Input Output Devices, Wired and Wireless Connectivity
9.0 INTRODUCTION
In this unit, you will learn about the I/O devices, wired and wireless devices.
Computers have an input/output subsystem which provides an efficient mode of
communication between the central system and the outside world. The I/O devices
include input devices, such as the keyboard and point-and-draw devices (mouse,
touch pads, light pens, trackball, joystick, etc.), and various types of output devices,
such as multimedia projectors and printers. The devices that collectively provide a
means of communication between the computer and the outside world are known
as peripheral devices, because they surround the CPU and the memory of a
computer system. You will also study about various external as well as internal I/O
devices, such as keyboard, mouse, scanner, printer, monitor, universal serial bus,
projectors, etc. A multimedia projector is an output device which is used to project
information from the computer onto a large screen so that it can be viewed by a
large group of people.
9.1 OBJECTIVES
form of a document containing the student’s name, roll number and marks scored
in each subject. This data must first be stored in the computer’s memory after
converting it into machine readable form. The data will then be processed (average
marks calculated) and sent from the memory to the output unit which will present
the data in a form that can be read by users.
The I/O devices that provide a means of communication between the
computer and the outside world are known as peripheral devices. This is because
they surround the CPU and the memory of a computer system. While input devices
are used to enter data from the outside world into the primary storage, output
devices are used to provide the processed results from primary storage to users.
Input Devices
Input devices are used to transfer user data and instructions to the computer. The
most commonly used input devices can be classified into the following five categories:
Keyboard devices (general and special purpose, key to tape, key to disk,
key to diskette).
Point and draw devices (mouse, touch screen, touch pads, light pen, trackball,
joystick).
Scanning devices (optical mark recognition, magnetic ink character
recognition, optical bar code reader, digitizer, electronic card reader).
Voice recognition devices.
Vision input devices (webcam, video camera).
Keyboard Devices
Keyboard devices allow input into the computer system by pressing a set of keys
mounted on a board, connected to the computer system. Keyboard devices are
typically classified as general purpose keyboards and special purpose keyboards.
General Purpose Keyboard
The most familiar means of entering information into a computer is through a
typewriter like keyboard that allows a person to enter alphanumeric information
directly.
The most popular keyboard used today is the 101-key keyboard with a traditional
QWERTY layout, comprising an alphanumeric keypad, 12 function keys, a variety
of special function keys, a numeric keypad and dedicated cursor control keys. It is
so called because of the arrangement of its alphanumeric keys in the upper left row.
Alphanumeric Keypad: This contains keys for the English alphabets, 0 to
9 numbers, special characters like * + – / [ ], etc.
12 Function Keys: These are keys labelled F1, F2 ... F12 and are a set of
user programmable function keys. The actual function assigned to a function
key differs from one software package to another. These keys are also
called soft keys since their functionality can be defined by the software.
Special Function Keys: Special functions are assigned to each of these
keys. The enter key, for example is used to send the keyed in data into the
NOTES
memory. Other special keys include:
o Shift (used to enter capital letters or special characters defined above
the number keys).
o Spacebar (used to enter a space at the cursor location).
o Ctrl (used in conjunction with other keys to provide added functionality
on the keyboard).
o Alt (like CTRL, used to expand the functionality of the keyboard).
o Tab (used to move the cursor to the next tab position defined).
o Backspace (used to move the cursor a position to the left and also
delete the character in that position).
o Caps Lock (to toggle between the capital letter lock feature – when
‘on’, it locks the keypad for capital letters input).
o Num Lock (to toggle the number lock feature – when ‘on’, it inputs
numbers when you press the numbers on the numeric keypad).
o Insert (used to toggle between the insert and overwrite mode during
data entry when ‘on’, entered text is inserted at the cursor location).
o Delete (used to delete the character at the cursor location).
o Home (used to move the cursor to the beginning of the work area which
could be the line, screen or document depending on the software being
used).
o End (used to move the cursor to the end of the work area).
o Page Up (used to display the previous page of the document being
currently viewed on screen).
o Page Down (used to view the next page of the document being currently
viewed on screen).
o Escape (usually used to negate the current command).
o Print Screen (used to print what is being currently displayed on the
screen).
Numeric Keypad: This consists of keys with numbers (0 to 9) and
mathematical operators (+ – * /) defined on them. It is usually located on
the right side of the keyboard and supports quick entry of numerical data.
Cursor Control Keys: They are defined by the arrow keys used to move
the cursor in the direction indicated by the arrow (top, down, left, right).
Another popular key arrangement, called Dvorak system, was designed
for easy learning and use. It was designed with the most common consonants in
one part and all the vowels on the other part of the middle row of the keyboard.
This key arrangement made users alternate keystrokes back and forth
between both hands. However, this keyboard has never been commonly used.
Special Purpose Keyboard
These are standalone data entry systems used for computers deployed for specific
applications. These typically have special purpose keyboards to enable faster
data entry. A very typical example of such keyboards can be seen at the Automatic
Teller Machines (ATMs) where the keyboard is required for limited functionality
(support for some financial transactions) by the customers. Point Of Sale (POS)
terminals at fast food joints and Air/Railway reservation counters are some other
examples of special purpose keyboards. These keyboards are specifically designed
for special types of applications only.
Key to Tape, Key to Disk, Key to Diskette
These are individual standalone workstations used for data entry only. These
processor-based workstations normally have a keyboard and a small monitor.
The function of the processor is to check the accuracy of the data when it is being
entered.
The screen displays data as it is being entered. These facilities are very
useful and desirable during mass data entry and are therefore becoming very
popular in data processing centres.
Point and Draw Devices
The keyboard facilitates input of data in text form only. While working with display-
based packages, we usually point to a display area and select an option from the
screen [fundamentals of Graphical User Interface (GUI) applications]. For such
cases, the sheer user friendliness of input devices that can rapidly point to a particular
option displayed on screen and support its selection resulted in the advent of
various point and draw devices.
Mouse
A mouse is a small input device used to move the cursor on a computer screen to
give instructions to the computer and to run programs and applications. It can be
used to select menu commands, move icons, size windows, start programs, close
windows, etc. Initially, the mouse was a widely used input device for the Apple
computer and was a regular device of the Apple Macintosh. Nowadays, the mouse
is the most important device in the functioning of a GUI of almost all computer
systems.
You can click a mouse button, i.e., press and release the left mouse button
to select an item. You can right click, i.e., press and release the right mouse button
to display a list of commands. You can double click, i.e., quickly press the left
mouse button twice without any time gap between the presses, to open
a program or a document. You can also drag and drop, i.e., place the cursor over
an item on the screen and then press and hold down the left mouse button. Holding
down the button, move the cursor to where you want to place the item and then
release the button.
Touch Screen
A touch screen is probably one of the simplest and most intuitive of all input devices
(refer Figure 9.3). It uses optical sensors in or near the computer screen that can
detect the touch of a finger on the screen. Once the user touches a particular
screen position, the sensors communicate the position to the computer. This is then
interpreted by the computer to understand the user’s choice for input. The most
common usage of touch screens is in information kiosks where users can receive
information at the touch of a screen. These devices are becoming increasingly
popular today.
Touch Pads
A touch pad is a touch sensitive input device which takes user input to control the
onscreen pointer and perform other functions similar to that of a mouse. Instead of
having an external peripheral device, such as a mouse, the touch pad enables the
user to interact with the device through the use of a single or multiple fingers being
dragged across relative positions on a sensitive pad. They are mostly found in
notebooks and laptops where convenience, portability and space are the prime
design concerns.
Light Pen
The light pen is a small input device used to select and display objects on a screen.
It functions with a light sensor and has a lens on the tip of a pen shaped device.
The light receptor is activated by pointing the light pen towards the display screen,
and the scanning beam then locates the position of the pen, allowing the user to
select objects or draw directly on screen.
Trackball
The trackball is a pointing device that is much like an inverted mouse. It consists of
a ball inset in a small external box or adjacent to and in the same unit as the
keyboard of some portable computers.
It is more convenient and requires much less space than the mouse since
here the whole device is not moved (as in the case of a mouse). Trackball comes
in various shapes but supports the same functionality. Typical shapes used are a
ball, a square and a button (typically seen in laptops).
Joystick
The joystick is a vertical stick that moves the graphic cursor in the direction the
stick is moved. It consists of a spherical ball which moves within a socket and has
a stick mounted on it. The user moves the ball with the help of the stick that can be
moved left or right, forward or backward, to move and position the cursor in the
desired location. Joysticks typically have a button on top that is used to select the
option pointed by the cursor.
Video games, training simulators and control panels of robots are some
common uses of a joystick.
Digitizer
Digitizers are used to convert drawings or pictures and maps into a digital format
for storage into the computer. A digitizer consists of a digitizing or graphics tablet
which is a pressure sensitive tablet, and a pen with the same X and Y coordinates
as on the screen. Some digitizing tablets also use a crosshair device instead of a
pen. The movement of the pen or crosshair is reproduced simultaneously on the
display screen. When the pen is moved on the tablet, the cursor on the computer
screen moves simultaneously to the corresponding position on the screen (X and Y
coordinates). This allows the user to draw sketches directly or input existing
sketched drawings easily. Digitizers are commonly used by architects and engineers
as a tool for Computer Aided Design (CAD).
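The tablet-to-screen coordinate mapping described above can be sketched as a simple scaling function. The tablet and screen sizes below are illustrative values, not taken from any particular device:

```python
def tablet_to_screen(x, y, tablet_size, screen_size):
    """Scale a pen position on the digitizing tablet to the matching screen pixel."""
    tw, th = tablet_size
    sw, sh = screen_size
    # Proportional scaling: the same relative position on tablet and screen.
    return round(x * (sw - 1) / (tw - 1)), round(y * (sh - 1) / (th - 1))

# A pen at the centre of a 1000x1000 tablet maps near the centre of a 1920x1080 screen.
print(tablet_to_screen(500, 500, (1000, 1000), (1920, 1080)))
```

As the pen moves on the tablet, repeatedly evaluating this mapping is what keeps the on-screen cursor at the corresponding position.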
Electronic Card Reader
Card readers are devices that also allow direct data input into a computer system.
The electronic card reader is connected to a computer system and reads the data
encoded on an electronic card and transfers it to the computer system for further
processing. Electronic cards are plastic cards with data encoded on them and
meant for a specific application. Typical examples of electronic cards are the plastic
cards issued by banks to their customers for use in ATMs. Electronic cards are
also used by many organizations for controlling access of various types of
employees to physically secured areas.
Depending on the manner in which the data is encoded, electronic cards
may be either magnetic strip cards or smart cards. Magnetic strip cards have a
magnetic strip on the back of the card. Data stored on magnetic strips cannot be
read with the naked eye, a useful way to maintain confidential data. Smart cards,
going a stage further, have a built-in microprocessor chip where data can be
permanently stored. They also possess some processing capability making them
suitable for a variety of applications. To gain access, for example, an employee
inserts a card or badge in the reader. This device reads and checks the authorization
code before permitting the individual to enter a secured area. Since smart cards
can hold more information as compared to magnetic strip cards, they are gaining
popularity.
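The access-control check performed by a card reader can be sketched as follows. The card codes, the area names and the lookup-table format are all hypothetical, chosen only to illustrate the comparison step:

```python
# Hypothetical table mapping authorization codes (read from the card's
# magnetic strip or chip) to the secured area each code may enter.
AUTHORIZED = {"A1234": "server room", "B5678": "records office"}

def check_access(card_code, area):
    """Return True if the code read from the card grants entry to the given area."""
    return AUTHORIZED.get(card_code) == area

print(check_access("A1234", "server room"))   # a valid code for this area
print(check_access("A1234", "records office"))  # same code, wrong area
```

A real reader would of course verify the code against a secured database rather than an in-memory table, but the decision logic is the same.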
Voice Recognition Devices
One of the most exciting areas of research involves the recognition of an individual
human voice as the basis of input to the computer system. By eliminating the keying
in of data, basic commands can be given very easily, facilitating quick operation.
Voice recognition devices consist of a microphone attached to the computer system.
A user speaks into the microphone to input data. The spoken words are then
converted into electrical signals (this is in the analog form). An analog to digital
converter then converts the analog form to digital form (0s and 1s) that can be
interpreted by the computer. The digitized version is then matched with the existing
pre-created dictionary to perform the necessary action. Voice recognition devices
have limited usage today because they have several problems. They must recognize
not only who is speaking but also what is being said (the message).
This difficulty arises primarily because people speak with different accents
and different tones and pitches. The computer requires a large vocabulary to be
able to interpret what is being said. Today’s voice recognition systems are therefore
successful in a limited domain. They are limited to accepting words and tasks
within a limited scope of operation and can handle only small quantities of data.
Most speech recognition systems are speaker dependent, i.e., they respond
to the unique speech of a particular individual, a feature that is not necessarily
inconvenient but limits generalized applications. Such a system therefore requires
creating a database of words for each person using it.
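The dictionary-matching step described above can be illustrated with a deliberately tiny sketch. Real recognizers compare acoustic features, not raw numbers; here each word in the speaker's personal dictionary is stood in for by a short list of numbers, and the input is matched to the entry with the smallest total difference:

```python
def match_word(sample, dictionary):
    """Return the dictionary word whose stored pattern is closest to the sample."""
    def distance(a, b):
        # Sum of element-wise absolute differences (a toy similarity measure).
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(dictionary, key=lambda word: distance(sample, dictionary[word]))

# Hypothetical per-speaker dictionary: two words, each a 3-number "pattern".
speaker_dict = {"yes": [1, 4, 2], "no": [5, 0, 3]}
print(match_word([1, 3, 2], speaker_dict))
```

The point is only the structure: digitize the input, compare it against every stored pattern for that speaker, and act on the best match.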
Vision Input Devices
Vision input devices allow data input in the form of images. Such a device usually
consists of a digital camera which focuses on the object whose picture is to be taken. The
camera creates the image of the object in digital format which can then be stored
within the computer.
Just as the speech recognition system digitizes the voice input, the vision input
system compares the digitized image to be interpreted with the pre-recorded
digitized images in the database of your computer system. Once it finds the right
match, it sends the image for further processing or a pre-defined action.
Video input or capture is the recording and storing of full motion video
recordings on your computer system’s storage device. High end video accelerator
and capture cards are required to capture and view good quality video recordings
on your computer. File compression of the recorded video file is very important
for file storage, since the video files captured are of high quality and may take up
one gigabyte of disk space. The most popular standard used for compression is
the Motion Pictures Expert Group (MPEG). Webcams and video cameras are
most commonly used to input visual data.
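A back-of-envelope calculation shows why compression is essential for stored video. The frame size, frame rate and duration below are illustrative choices, not from the text:

```python
def raw_video_bytes(width, height, fps, seconds, bytes_per_pixel=3):
    """Storage needed for uncompressed full-motion video (24-bit colour)."""
    return width * height * bytes_per_pixel * fps * seconds

# One minute of uncompressed 640x480 video at 25 frames per second:
size = raw_video_bytes(640, 480, 25, 60)
print(round(size / 10**9, 2), "GB")  # about 1.38 GB for a single minute
```

Even one minute of modest-resolution video approaches the "one gigabyte" figure mentioned above, which is why MPEG compression is applied before storage.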
Web Camera
A Web camera is a video capturing device attached to the computer system,
mostly through a USB port, and is used for video conferencing, video security,
as a control input device and also in gaming.
Output Devices
An output device is an electromechanical device that accepts data from the
computer and translates it into a form that can be understood by the outside world.
The processed data, stored in the memory of the computer, is sent to an output
unit which then transforms the internal representation of data into a form that can
be read by the users.
Normally, the output is produced on a display unit like a computer monitor
or can be printed through a printer on paper. At times speech outputs and mechanical
outputs are also used for some specific applications.
Output produced on display units or speech output that cannot be touched
is referred to as softcopy output while output produced on paper or material that
can be touched is known as hardcopy output. A wide range of output devices are
available today and can be broadly classified into the following four types:
Display devices (monitors, multimedia projectors)
Speakers
Printers (dot matrix, inkjet, laser)
Plotters (flatbed, drum)
Display Devices
It is almost impossible to even think of using a computer system without a display
device. A display device is the most essential peripheral of a computer system.
Initially, alphanumeric display terminals were used that formed a 7×5 or 9×7 array
of dots to display text characters only. As a result of the increasing demand and
use of graphics and GUIs, graphic display units were introduced. These graphic
display units are based on a series of dots known as pixels used to display images.
Every single dot displayed on the screen can be addressed uniquely and directly.
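Direct addressing of each dot can be sketched with a hypothetical framebuffer: the pixels are stored row by row in one flat array, so the dot at column x, row y lives at index y * width + x. The tiny dimensions are chosen only for illustration:

```python
WIDTH, HEIGHT = 8, 4
framebuffer = [0] * (WIDTH * HEIGHT)   # one entry per pixel, all initially off

def set_pixel(x, y, value=1):
    """Address a single dot directly and set its value."""
    framebuffer[y * WIDTH + x] = value

set_pixel(3, 2)                        # turn on the dot at column 3, row 2
print(framebuffer[2 * WIDTH + 3])      # read the same dot back
```

Because every pixel has its own index, a drawing routine can change any dot independently of its neighbours, which is the flexibility the text refers to.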
Display Screen Technologies
Owing to the fact that each dot can be addressed as a separate unit, it provides
greater flexibility for drawing pictures. Display screen technology may be of the
following categories:
Cathode Ray Tube: The Cathode Ray Tube (CRT) consists of an electron gun
with an electron beam controlled by electromagnetic fields, and a phosphor coated
glass display screen structured into a grid of small dots known as pixels. The image
is created by the electron beam produced by the electron gun, which is directed
onto the phosphor coating by the electromagnetic fields.
Liquid Crystal Display: Liquid Crystal Display (LCD) was first introduced
in the 1970s in digital clocks and watches, and is now widely used in computer
display units. Replacing the CRT with the LCD made display units slimmer and
more compact, although the image quality and the colour capability became
comparatively poorer.
The main advantage of LCD is its low energy consumption. It finds its most
common usage in portable devices where size and energy consumption are of
main importance.
Projection Display: Projection display technology is characterized by
replacing the personal size screen with large screens upon which the images are
projected. It is attached to the computer system and the magnified display of the
computer system is projected on a large screen.
Monitors: Monitors use a CRT to display information. A monitor resembles a
television screen and is similar to it in other respects.
The monitor is typically associated with a keyboard for manual input of
characters. The screen displays information as it is keyed in enabling a visual
check of the input before it is transferred to the computer. It is also used to display
the output from the computer and hence serves as both an input and an output
device.
This is the most commonly used (I/O) device today and is also known as a
soft copy terminal. A printing device is usually required to provide a hard copy of
the output.
Multimedia Projectors
A multimedia projector is an output device which is used to project information
from the computer onto a large screen so that it can be viewed by a large group of
people. Prior to this, the standard mode of making presentations was to make
transparencies and project them using an overhead projector. This was a tedious
and time consuming activity since for every change in the subject matter a new
transparency had to be prepared. And of course, since electronic cut, copy and
paste was not possible, this meant additional work.
A multimedia projector can directly be plugged into a computer system and
the information projected on a large screen thereby making it possible to present
information to a large audience. The presenter can also use a pointer to emphasize
specific areas of interest in the presentation.
LCD flat screens connected to the computer systems for projecting the
LCD image using an overhead multimedia projector are widely used today.
Owing to its convenience and applicability, multimedia projectors are
increasingly becoming popular in seminars, presentations and classrooms.
Speakers
Used to produce music or speech from programs, a speaker port (a port is a
connector in your computer wherein you can connect an external device) allows
connection of a speaker to a computer. Speakers can be built into the computer
or can be attached separately.
Printers
Printers are used for creating paper output. There is a huge range of commercially
available printers today (estimated to be 1500 different types). These printers can
be classified into the following three categories based on:
Printing Technology
Printers can be classified as impact or non-impact printers, based on the technology
they use for producing output. Impact printers work on the mechanism similar to a
manual typewriter where the printer head strikes on the paper and leaves the
impression through an inked ribbon. Dot matrix printers and character printers fall
under this category. Non-impact printers use chemicals, inks, toners, heat or electric
signals to print on the paper and they do not physically touch the paper while printing.
Printing Speed
This refers to the number of characters printed in a unit of time. Based on speed,
printers may be classified as character printers (print one character at a time), line
printers (print one line at a time) and page printers (print the entire page at a time).
Printer speeds are therefore measured in terms of characters per second (cps) for
a character printer, lines per minute (lpm) for a line printer, and pages per minute
(ppm) for a page printer.
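The three speed units can be compared directly once a page size is fixed. The figures of 80 characters per line and 60 lines per page below are assumptions for illustration only:

```python
CHARS_PER_LINE, LINES_PER_PAGE = 80, 60   # assumed page layout

def cps_to_ppm(cps):
    """Convert characters per second into effective pages per minute."""
    chars_per_page = CHARS_PER_LINE * LINES_PER_PAGE
    return cps * 60 / chars_per_page

# A 400 cps character printer delivers the equivalent of:
print(cps_to_ppm(400), "pages per minute")  # 5.0 pages per minute
```

This makes it easy to see why page printers quoted in ppm far outpace character printers quoted in cps.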
Printing Quality
It is determined by the resolution of printing and is characterized by the number of
dots that can be printed per linear inch, horizontally or vertically. It is measured in
terms of dots per inch or dpi. Printers can be classified as near letter quality or
NLQ, letter quality or LQ, near typeset quality or NTQ and typeset quality or TQ,
based on their printing quality. NLQ printers have resolutions of about 300 dpi, LQ
of about 600 dpi, NTQ of about 1200 dpi and TQ of about 2000 dpi. NLQ and LQ
printers are used for ordinary printing in day to day activities while NTQ and TQ
printers are used to produce top quality printing typically required in the publishing
industry.
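The quality bands listed above translate directly into a small classification routine. The cut-off values are the approximate figures from the text:

```python
def print_quality(dpi):
    """Classify printing quality by resolution, using the bands given above."""
    if dpi >= 2000:
        return "TQ"     # typeset quality
    if dpi >= 1200:
        return "NTQ"    # near typeset quality
    if dpi >= 600:
        return "LQ"     # letter quality
    if dpi >= 300:
        return "NLQ"    # near letter quality
    return "draft"      # below the bands described in the text

print(print_quality(600))   # LQ
```

A 600 dpi printer thus falls in the letter quality band used for day to day printing, while publishing work calls for 1200 dpi and above.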
Types of Printers
Dot Matrix: Dot matrix printers are the most widely used impact printers in personal
computing. These printers use a print head consisting of a series of small metal pins
that strike on a paper through an inked ribbon leaving an impression on the paper
through the ink transferred. Characters thus produced are in a matrix format. The
shape of each character, i.e., the dot pattern, is obtained from information held
electronically.
The speed, versatility and ruggedness, combined with low cost, tend to
make such printers particularly attractive in the personal computer market. Typical
printing speeds in case of dot matrix printers range between 40–1000 characters
per second (cps). In spite of all these features, the low print quality of dot matrix
printers remains their major disadvantage.
Inkjet: Inkjet printers are based on the use of a series of nozzles for propelling
droplets of printing ink directly on almost any size of paper. They, therefore, fall
under the category of non-impact printers. The print head of an inkjet printer consists
of a number of tiny nozzles that can be selectively heated up in a few microseconds
by an Integrated Circuit (IC) resistor. When this happens, the ink near it vaporizes
and is ejected through the nozzle to make a dot on the paper placed in front of the
print head. The character is printed by selectively heating the appropriate set of
nozzles as the print head moves horizontally.
Laser: Laser printers work on the same printing technology as photocopiers, using
static electricity and heat to print with a high quality powder substance known as
toner.
Laser printers are capable of converting computer output into print, page
by page. Since characters are formed by very tiny ink particles they can produce
very high quality images (text and graphics). They generally offer a wide variety of
character fonts, and are silent and fast in use. Laser printers can print from 10 to
100 pages per minute, depending upon the model, making them faster than other
printers. Laser printing is a high quality, high speed, high volume and non-impact
technology that works on almost any kind of paper. Even though
this technology is more expensive than inkjet printers it is preferred because of its
unmatched features, such as high quality, high-speed printing and noiseless and
easy to use operations.
Plotters
Plotters are used to make line illustrations on paper. They are capable of producing
charts, drawings, graphics, maps, and so on. A plotter is much like a printer but is
designed to print graphs instead of alphanumeric characters. Based on the
technology used, there are mainly two types of plotters: pen plotters and electrostatic
plotters. Pen plotters, also known as flatbed plotters, draw images with multicolored
pens attached to a mechanical arm. Electrostatic plotters, also known as drum
plotters, work on the same technology as laser printers.
Flatbed and drum plotters are the most commonly used types.
Flatbed Plotters
Flatbed plotters have a flat base like a drawing board on which the paper is laid.
One or more arms, each of them carrying an ink pen, moves across the paper to
draw. The arm movement is controlled by a microprocessor (chip). The arm can
move in two directions, one parallel to the plotter and the other perpendicular to it
(called the x and y directions). With this kind of movement, it can move very
precisely to any point on the paper placed below.
The computer sends the commands to the plotter which are translated into
x and y movements. The arm moves in very small steps to produce continuous
and smooth graphics.
The size of the plot in a flatbed plotter is limited only by the size of the
plotter’s bed.
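The translation of computer commands into x and y arm movements can be sketched with a toy interpreter. The "MOVE"/"DRAW" command names are invented for illustration; real plotter languages such as HP-GL differ in detail:

```python
def run_plot(commands):
    """Translate plot commands into the line segments the pen actually draws."""
    pen = (0, 0)
    segments = []                     # line segments inked on the paper
    for line in commands:
        op, x, y = line.split()
        target = (int(x), int(y))
        if op == "DRAW":
            segments.append((pen, target))
        pen = target                  # MOVE repositions the arm without drawing
    return segments

# Lift the pen to (10, 10), then draw two sides of a rectangle.
print(run_plot(["MOVE 10 10", "DRAW 10 50", "DRAW 40 50"]))
```

Each returned segment is then executed by the microprocessor as many tiny x and y steps, which is what produces the continuous, smooth lines described above.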
The advantage of flatbed plotters is that the user can easily control the
graphics. He can manually pick up the arm anytime during the production of graphics
and place it on any position on the paper to alter the position of graphics to his
choice.
The disadvantage here is that flatbed plotters occupy a large amount of
space.
Drum Plotters
Drum plotters move the paper with the help of a revolving drum during printing.
The arm carrying a pen moves only in one direction, perpendicular to the direction
of the motion of the paper. It means that while printing, the plotter pens print on
one axis of the paper and the cylindrical drum moves the paper on the other
axis. With this printing technology, the plotter can print on paper of unlimited
length but limited width. Drum plotters are compact and
lightweight compared to flatbed plotters. This is one of the advantages of such
plotters.
The disadvantage of drum plotters, however, is that the user cannot freely
control the graphics when they are being created. Plotters are more expensive
when compared to printers. Typical application areas for plotters include Computer
Aided Engineering (CAE) applications, such as Computer Aided Design (CAD),
Computer Aided Manufacturing (CAM) and architectural drawing and map
drawing.
Answers to Check Your Progress Questions
1. Input devices are used to transfer user data and instructions to the computer.
2. There are three types of printers:
Dot matrix
Inkjet
Laser
3. A Web camera is a video capturing device attached to the computer system,
mostly through a USB port, and is used for video conferencing, video security,
as a control input device and also in gaming.
9.5 SUMMARY
Keyboard devices allow input into the computer system by pressing a set
of keys mounted on a board, connected to the computer system. Keyboard
devices are typically classified as general purpose keyboards and special
purpose keyboards.
A mouse is a small input device used to move the cursor on a computer
screen to give instructions to the computer and to run programs and
applications.
The light pen is a small input device used to select and display objects on a
screen. It functions with a light sensor and has a lens on the tip of a pen
shaped device.
An output device is an electromechanical device that accepts data from the
computer and translates it into a form that can be understood by the outside
world. The processed data, stored in the memory of the computer, is sent
to an output unit which then transforms the internal representation of data
into a form that can be read by the users.
Printers are used for creating paper output.
Plotters are used to make line illustrations on paper. They are capable of
producing charts, drawings, graphics, maps, and so on.
Flatbed plotters have a flat base like a drawing board on which the paper is
laid. One or more arms, each carrying an ink pen, move across the paper to draw.
Input devices: These devices are used to transfer user data and instructions
to the computer.
Digitizers: They are used to convert drawings or pictures and maps into a
digital format for storage into the computer.
Long Answer Questions
1. Explain the different types of input devices.
2. Explain the different types of output devices.
9.8 FURTHER READINGS
Computer Software
10.0 INTRODUCTION
10.1 OBJECTIVES
are predefined. Computer software can range from only a few instructions to
millions of instructions; for example, a word processor or a Web browser. Figure
10.1 shows how software interacts between user and computer system.
Figure 10.1 Software layers: Users, Application Software, Hardware System
designed for personal use on a daily basis. The past few years have
seen a marked increase in the personal computer software market, from the
normal text editor to the word processor and from the simple paintbrush to
advanced image editing software. This software is used in almost
every field, whether it is a database management system, a financial
accounting package or multimedia based software. It has emerged as
a versatile tool for daily life applications.
Software can also be classified in terms of the relationship between software users
or software purchasers and software development.
Commercial Off-The-Shelf (COTS): This comprises the software without
any committed user before it is put up for sale. The software users have little
or no contact with the vendor during development. It is sold through retail
stores or distributed electronically. This software includes commonly used
programs, such as word processors, spreadsheets, games, income tax
programs, as well as software development tools, such as software testing
tools and object modelling tools.
Customized or Bespoke: This software is designed for a specific user,
who is bound by some kind of formal contract. Software developed for
an aircraft, for example, is usually built for a particular aircraft making
company. It is not purchased 'off-the-shelf' like word processing
software.
Customized COTS: In this classification, a user can enter into a contract
with the software vendor to develop a COTS product for a special purpose,
that is, software can be customized according to the needs of the user.
Another growing trend is the development of COTS software components—
the components that are purchased and used to develop new applications.
The COTS software component vendors are essentially parts stores which
are classified according to their application types. These types are listed as
follows:
o Stand-Alone Software: Software that resides on a single computer
and does not interact with any other software installed on a different
computer.
o Embedded Software: Software that forms part of a unique
application involving hardware, like an automobile controller.
o Real-Time Software: Software whose operations are
executed within very short time limits, often microseconds, e.g., radar
software in an air traffic control system.
o Network Software: In this type of software, software and its
components interact across a network.
10.2.1 System Software
System software constitutes all the programs, languages and documentation
provided by the manufacturer in the computer. These programs provide the user
with access to the system so that he can communicate with the computer and
write or develop his own programs. The software makes the machine user-friendly
and makes efficient use of the resources of the hardware. Systems software
comprises permanent programs on a system that reduce the burden of the
programmer as well as aid in maximum resource utilization. MS DOS (Microsoft
Disk Operating System) was one of the most widely used systems software for
IBM compatible microcomputers. Windows and its different versions are popular
examples of systems software. Systems software is installed permanently on a
computer system and used on a daily basis.
Operating System
An Operating System (OS) is the main control program for handling all other
programs in a computer. The other programs, usually known as ‘application
programs’, use the services provided by the OS through a well-defined
Application Program Interface (API). Every computer necessarily requires some
type of operating system that instructs the computer how to operate and use
other programs installed in the computer. The role of an OS in a computer is
similar to the role of a manager in an office, responsible for the overall
management of the organization.
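The application-to-OS relationship can be seen in even the simplest program: the application never touches the disk directly but requests OS services through an API. A minimal sketch using standard Python file calls, which the OS services on the program's behalf:

```python
import os
import tempfile

# The application asks the OS (through the API) for a temp directory and a file.
path = os.path.join(tempfile.gettempdir(), "demo.txt")

with open(path, "w") as f:   # OS allocates a file handle
    f.write("hello")         # OS schedules the write to disk
with open(path) as f:
    contents = f.read()      # OS reads the data back
os.remove(path)              # OS releases the directory entry

print(contents)
```

Every one of these calls is a request to the operating system; the application program itself contains no knowledge of the disk hardware.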
10.2.2 Application Software
Users install specific software programs based on their requirements; for instance,
accounting software (like Tally) used in business organizations and designing
software used by architects. All programs, languages and utility programs constitute
software. With the help of these programs, users can design their own software
based on individual preferences. Software programs aid in achieving efficient
application of computer hardware and other resources.
1. Licensed Software
Although there is a large availability of open source or free software online, not all
software available in the market is free for use. Some software falls under the
category of Commercial Off-The-Shelf (COTS). COTS is a term used for
software and hardware technology which is available to the general public for
sale, license or lease. In other words, to use COTS software, you must pay its
developer in one way or another.
Most of the application software available in the market needs a software
license for use.
Software is licensed in different categories. Some of these licenses are based
on the number of unique users of the software while other licenses are based on
the number of computers on which the software can be installed. A specific distinction
between licenses would be an Organizational Software License, which grants an
organization the right to distribute the software or application to a certain number
of users or computers within the organization, and a Personal Software License
which allows the purchaser of the application to use the software on his or her
computer only.
2. Free Domain Software
To understand this, let us distinguish between the commonly used terms Freeware
and Free Domain software. The term ‘freeware’ has no clear accepted definition,
but is commonly used for packages that permit redistribution but not modification.
This means that their source code is not available. Free domain software is software
that comes with permission for anyone to use, copy, and distribute, either verbatim
or with modifications, either gratis or for a fee. In particular, this means that the
source code must be available. Free domain software can be freely used, modified,
and redistributed but with one restriction: the redistributed software must be
distributed with the original terms of free use, modification and distribution. This is
known as ‘copyleft’. Free software is a matter of freedom, not price. Free software
may be packaged and distributed for a fee. The ‘Free’ here refers to the ability of
reusing it — modified or unmodified, as a part of another software package. The
concept of free software is the brainchild of Richard Stallman, head of the GNU
project. The best known example of free software is Linux, an operating system
that is proposed as an alternative to Windows or other proprietary operating
systems. Debian is an example of a distributor of a Linux package.
Free software should, therefore, not be confused with freeware, which is a
term used for describing software that can be freely downloaded and used but
which may contain restrictions for modification and reuse.
A few types of application programs that are widely accepted these days
are:
1. Word Processing
A word processor is an application program used for the production of any type
of printable text document including composition, editing, formatting and printing.
It takes advantage of a Graphical User Interface (GUI) to present data in a
required format. It can produce any arbitrary combination of images, graphics
and text. Microsoft Word is the most widely used word processing system.
Microsoft Word can be used for the simplest to the most complex word
processing applications. Using Word, you can write letters and reports, prepare
bills and invoices, prepare office stationery, such as letterheads, envelopes and
forms, design brochures, pamphlets, newsletters and magazines, etc.
2. Spreadsheet
Excel is ideal for a task that needs a number of lists, tables, financial calculations,
analysis and graphs. Excel is good for organizing different kinds of data; however,
it is best suited to numerical data. Thus, Excel can be used when you not only
need a tool for storing and managing data, but also analysing and querying it. In
addition to providing simple database capabilities, Excel also allows you to create
documents for the World Wide Web (WWW).
The menus, toolbars and icons of MS Excel are very similar (though not
identical) to those of MS Word. This is in keeping with Microsoft's much hyped philosophy
and strategy of offering users a totally integrated office suite pack. From the user’s
point of view, this means less time spent in learning the second package once you
know the first, and almost effortless and seamless exchange of data between various
components.
3. Presentation Graphics
PowerPoint is a presentation tool that helps create eye-catching and effective
presentations in a matter of minutes. A presentation comprises individual slides
arranged in a sequential manner. Normally, each slide covers a brief topic. For
example, you can use PowerPoint software for preparing
presentations and adding notes to the specific slides. Similarly, you have the option
of either printing the slides—in case you want to use an overhead projector—or
simply attach your computer to an LCD display panel that enlarges the picture
several times and shows the output on a screen.
You have three options for creating a new presentation:
(i) Begin by working with a wizard (called the AutoContent Wizard) that
helps you determine the theme, contents and organization of your presentation
by using a predefined outline, or
(ii) Start by picking out a PowerPoint Design Template which determines
the presentation’s colour scheme, fonts and other design features, or
(iii) Begin with a completely blank presentation with the colour scheme, fonts
and other design features set to default values.
If you decide to choose the third option, PowerPoint designers have
provided a wide assortment of predefined slide formats and Clip Art graphics
libraries. Through these predefined slide formats, you can quickly create slides
based on standard layouts and attributes.
PowerPoint shares a common look and feel with other MS Office
components, and having once mastered Word and Excel, learning PowerPoint is
almost like playing a game. And it is also easy to pick up data from Word and
Excel directly into a PowerPoint presentation and vice versa.
Database Management Software
The key features of all the different versions of Windows XP are given below:
Windows CE
10.5 SUMMARY
UNIT 11 SOFTWARE
DEVELOPMENT
Structure
11.0 Introduction
11.1 Objectives
11.2 Design and Testing Requirement Analysis
11.3 Design Process
11.4 Models for System Development
11.5 Software Testing Life Cycle
11.6 Software Testing
11.7 Software Paradigms and Programming Methods
11.8 Answers to Check Your Progress Questions
11.9 Summary
11.10 Key Words
11.11 Self Assessment Questions and Exercises
11.12 Further Readings
11.0 INTRODUCTION
In this unit, you will learn about the process of software development and
testing. Software development process is the process of dividing software
development work into distinct phases to improve design, product management,
and project management. It is also known as a software development life cycle.
You will also learn about the various levels of software testing.
11.1 OBJECTIVES
11.2 DESIGN AND TESTING REQUIREMENT
ANALYSIS
Functional Requirements
IEEE defines functional requirements as ‘a function that a system or component
must be able to perform.’ These requirements describe the interaction of the software
with its environment and specify the inputs, outputs, external interfaces and the
functions that should be included in the software. Functional requirements also
specify the procedure by which the software should react to particular inputs or
behave in particular situations.
To understand functional requirements properly, let us consider the following
example of an online banking system:
- The user of the bank should be able to search the desired services from the
  available ones.
- There should be appropriate documents for the user to read. This implies
  that when a user wants to open an account in the bank, the forms must be
  available so that the user can open an account.
- After registration, the user should be provided with a unique
  acknowledgement number so that he can later be given an account
  number.
The aforementioned functional requirements describe the specific services
provided by the online banking system. They reflect user requirements and show
that functional requirements may be described at different levels of detail. With
the help of these functional requirements, users can easily view, search and
download registration forms and other information about the bank. On the other
hand, if requirements are not stated properly, they may be misinterpreted by
software engineers, and user requirements will not be met.
Non-Functional Requirements
The non-functional requirements (also known as quality requirements) relate to
system attributes such as reliability and response time. Non-functional requirements
arise due to user requirements, budget constraints, organizational policies, and so
on. These requirements are not related directly to any particular function provided
by the system.
Non-functional requirements should be accomplished in software to make
it perform efficiently. For example, if an aeroplane is unable to fulfill reliability
requirements, it is not approved for safe operation. Similarly, if a real-time control
system is ineffective in accomplishing non-functional requirements, the control
functions cannot operate correctly. Different types of non-functional requirements
are shown in Figure 11.2.
o Standards requirements: Describe the process standards to be used
during software development. For example, the software should be
developed using standards specified by the ISO (International
Organization for Standardization) and the IEEE.
External requirements: These requirements include all the requirements
that affect the software or its development process externally. External
requirements comprise the following:
o Interoperability requirements: Define the way in which different
computer-based systems interact with each other in one or more
organizations.
o Ethical requirements: Specify the rules and regulations of the software
so that they are acceptable to users.
o Legislative requirements: Ensure that the software operates within
the legal jurisdiction. For example, pirated software should not be sold.
Non-functional requirements are difficult to verify. Hence, it is essential to
write non-functional requirements quantitatively so that they can be tested. For
this, non-functional requirements metrics are used.
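Since the text stresses writing non-functional requirements quantitatively so that they can be tested, the following is a minimal Python sketch of that idea. The service function, the 0.5-second response-time limit and all names here are hypothetical illustrations, not taken from the source.

```python
import time

def lookup_account(account_number, accounts):
    """Hypothetical service whose response time we want to bound."""
    return accounts.get(account_number)

def meets_response_time(func, args, limit_seconds):
    """Return True if a single call to func completes within limit_seconds.

    This turns a vague requirement ('the system should be fast') into a
    quantitative, testable one ('a lookup completes within 0.5 s')."""
    start = time.perf_counter()
    func(*args)
    return (time.perf_counter() - start) <= limit_seconds

accounts = {"ACC-1001": {"owner": "A. User", "balance": 500}}
fast_enough = meets_response_time(lookup_account, ("ACC-1001", accounts), 0.5)
```

A real non-functional requirements metric would average many runs under realistic load; this sketch only shows how a stated limit becomes a pass/fail check.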
Domain Requirements
Requirements that are derived from the application domain of a system, instead
of from the needs of the users, are known as domain requirements. These
requirements may be new functional requirements or specify a method to perform
some particular computations. In addition, these requirements include any constraint
that may be present in the existing functional requirements. As domain requirements
reflect the fundamentals of the application domain, it is important to understand
these requirements. Also, if these requirements are not fulfilled, it may be difficult
to make the system work as desired.
A system can include a number of domain requirements. For example, it may
comprise a design constraint that describes a user interface which is capable of
accessing all the databases used in a system. It is important for a development
team to create databases and interface designs as per established standards.
Similarly, the requirements of the user such as copyright restrictions and security
mechanism for the files and documents used in the system are also domain
requirements. When domain requirements are not expressed clearly, it can result
in the following problems:
Problem of understandability: When domain requirements are specified
in the language of the application domain (such as mathematical expressions),
it becomes difficult for software engineers to understand them.
Problem of implicitness: When domain experts understand the domain
requirements but do not express these requirements clearly, it may create a
problem (due to incomplete information) for the development team to
understand and implement the requirements in the system.
Note: Information about requirements is stored in a database, which helps the software
development team to understand user requirements and develop the software according to
those requirements.
Requirements Analysis
IEEE defines requirements analysis as ‘(1) The process of studying user needs
to arrive at a definition of a system, hardware, or software requirements. (2)
The process of studying and refining system, hardware, or software
requirements.’ Requirements analysis helps to understand, interpret, classify, and
organize the software requirements in order to assess the feasibility, completeness
and consistency of the requirements. The other tasks performed using requirements
analysis are
- To detect and resolve conflicts that arise due to unclear and unspecified
  requirements
- To determine the operational characteristics of the software and how it
  interacts with its environment
- To understand the problem for which the software is to be developed
- To develop an analysis model to analyse the requirements in the software.
Software engineers perform analysis modelling and create an analysis model
that describes ‘what’ the software should do rather than ‘how’ it fulfils the
requirements. This model focuses on the functions that the software
should perform, the behaviour it should exhibit and the constraints that are applied
on the software. This model also determines the relationship of one component
with the other components. Clear and complete requirements specified in the analysis
model help the software development team to develop the software according to
those requirements, assess the quality of the software when it is developed, and
define a set of requirements that can be validated at that stage.
Let us consider an example of constructing a study room where the user
knows the dimensions of the room, the location of doors and windows, and the
available wall space. Before constructing the study room, he provides information
about flooring, wallpaper, and so on to the constructor. This information helps the
constructor to analyse the requirements and prepare an analysis model that describes
the requirements. This model also describes what needs to be done to fulfil those
requirements. Similarly, an analysis model created for the software facilitates the
software development team to understand what is required in the software and
then develop it.
In Figure 11.3, the analysis model connects the system description and the
design model. System description provides information about the entire functionality
of the system, which is achieved by implementing the software, hardware and
data. In addition, the analysis model specifies the software design in the form of a
design model which provides information about the software’s architecture, user
interface and component-level structure.
Software architecture:
- Provides an insight to all the interested stakeholders that enables them to
  communicate with each other
- Highlights early design decisions, which have a great impact on the software
  engineering activities (like coding and testing) that follow the design phase
- Creates intellectual models of how the system is organized into components
  and how these components interact with each other.
Currently, software architecture is represented in an informal and unplanned
manner. Though the architectural concepts are often represented in the infrastructure
(for supporting particular architectural styles) and the initial stages of a system
configuration, the lack of an explicit independent characterization of architecture
restricts the advantages of this design concept in the present scenario.
Note that software architecture comprises two elements of the design model, namely,
data design and architectural design. Both these elements are discussed
later in this unit.
Patterns
A pattern provides a description of the solution to a recurring design problem of
some specific domain in such a way that the solution can be used again and again.
The objective of each pattern is to provide an insight to a designer who can
determine the following:
- Whether the pattern can be reused
- Whether the pattern is applicable to the current project
- Whether the pattern can be used to develop a similar but functionally or
  structurally different design pattern.
Types of Design Patterns
A software engineer can use design patterns during the entire software design
process. When the analysis model is developed, the designer can examine the
problem description at different levels of abstraction to determine whether it
complies with one or more of the following types of design patterns.
Architectural patterns: These patterns are high-level strategies that refer
to the overall structure and organization of a software system. That is, they
define the elements of a software system such as subsystems, components,
classes, etc. In addition, they also indicate the relationship between the
elements along with the rules and guidelines for specifying these relationships.
Note that architectural patterns are often considered equivalent to software
architecture.
Design patterns: These patterns are medium-level strategies that are used
to solve design problems. They provide a means for the refinement of the
elements (as defined by architectural pattern) of a software system or the
relationship among them. Specific design elements such as relationship among
components or mechanisms that affect component-to-component interaction
are addressed by design patterns. Note that design patterns are often
considered equivalent to software components.
Idioms: These patterns are low-level patterns, which are programming-language
specific. They describe the implementation of a software
component, the method used for interaction among software components,
etc., in a specific programming language. Note that idioms are often termed
as coding patterns.
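To make the medium level concrete, here is a sketch of one classic design pattern, Strategy, in which interchangeable algorithms sit behind a common interface so the calling component stays decoupled from the choice. The payment example and all class names are illustrative assumptions, not taken from the text.

```python
# Strategy pattern: a recurring design solution in which a component
# delegates part of its behaviour to a pluggable "strategy" object.

class FlatFee:
    def charge(self, amount):
        return amount + 10          # fixed surcharge

class PercentFee:
    def charge(self, amount):
        return amount * 1.02        # 2 per cent surcharge

class PaymentProcessor:
    """The element being refined: it depends only on the charge() interface,
    not on which fee strategy is plugged in."""
    def __init__(self, fee_strategy):
        self.fee_strategy = fee_strategy

    def process(self, amount):
        return self.fee_strategy.charge(amount)

flat = PaymentProcessor(FlatFee()).process(100)       # 110
percent = PaymentProcessor(PercentFee()).process(100) # approximately 102.0
```

The same solution can be reused wherever a family of interchangeable behaviours must be selected at run time, which is exactly the kind of recurring problem a pattern documents.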
Modularity
Modularity is achieved by dividing the software into uniquely named and
addressable components, which are also known as modules. A complex system
(large program) is partitioned into a set of discrete modules in such a way that
each module can be developed independent of other modules (see Figure 11.5).
After developing the modules, they are integrated together to meet the software
requirements. Note that the larger the number of modules a system is divided into,
the greater will be the effort required to integrate the modules.
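The partition-then-integrate idea above can be sketched in miniature. Each function below stands in for an independently developed module with a uniquely named, addressable interface; the function names and the report format are hypothetical, chosen only for illustration.

```python
# Each function plays the role of an independently developed module;
# integration is simply composing their interfaces.

def read_input(raw):
    """Input module: parse a comma-separated string into integers."""
    return [int(x) for x in raw.split(",")]

def compute_total(values):
    """Processing module: independent of how the values were obtained."""
    return sum(values)

def format_report(total):
    """Output module: independent of how the total was computed."""
    return f"total={total}"

def run(raw):
    """Integration step: the modules are combined to meet the requirement."""
    return format_report(compute_total(read_input(raw)))
```

Because each module depends only on its neighbour's interface, any one of them can be developed or replaced independently, which is the point of modularity.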
Information Hiding
Information hiding is of immense use when modifications are required during the
testing and maintenance phase. Some of the advantages associated with information
hiding are listed below.
- Leads to low coupling
- Emphasizes communication through controlled interfaces
- Decreases the probability of adverse effects
- Restricts the effects of changes in one component on others
- Results in higher quality software.
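A minimal sketch of information hiding in Python follows: the balance is kept behind a controlled interface so callers cannot depend on the storage representation. The `Account` class and its rules are an invented example, not from the source.

```python
class Account:
    """Hides its internal state behind a controlled interface.

    Callers never touch the stored balance directly, so the representation
    can change without affecting them -- low coupling, fewer adverse effects."""

    def __init__(self, opening_balance):
        self._balance = opening_balance   # internal detail, not part of the interface

    def deposit(self, amount):
        # The controlled interface enforces the component's invariants.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

acct = Account(100)
acct.deposit(50)
```

If `_balance` were later replaced by, say, a transaction log, only this class would change; that containment of change is the advantage the list above describes.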
As stated earlier, the waterfall model comprises several phases, which are listed
below.
System/information engineering modeling: This phase establishes the
requirements for all parts of the system. Software being a part of the larger
system, a subset of these requirements is allocated to it. This system view is
necessary when software interacts with other parts of the system including
hardware, databases, and people. System engineering includes collecting
requirements at the system level while information engineering includes
collecting requirements at a level where all decisions regarding business
strategies are taken.
Requirements analysis: This phase focuses on the requirements of the
software to be developed. It determines the processes that are to be
incorporated during the development of the software. To specify the
requirements, users’ specifications should be clearly understood and their
requirements be analyzed. This phase involves interaction between the users
and the software engineers and produces a document known as Software
Requirements Specification (SRS).
Design: This phase determines the detailed process of developing the
software after the requirements have been analyzed. It utilizes software
requirements defined by the user and translates them into software
representation. In this phase, the emphasis is on finding solutions to the
problems defined in the requirements analysis phase. The software engineer
is mainly concerned with the data structure, algorithmic detail and interface
representations.
Coding: This phase emphasizes translation of design into a programming
language using the coding style and guidelines. The programs created should
be easy to read and understand. All the programs written are documented
according to the specification.
Testing: This phase ensures that the software is developed as per the user’s
requirements. Testing is done to check that the software is running efficiently
and with minimum errors. It focuses on the internal logic and external functions
of the software and ensures that all the statements have been exercised
(tested). Note that testing is a multistage activity, which emphasizes
verification and validation of the software.
Implementation and maintenance: This phase delivers fully functioning
operational software to the user. Once the software is accepted and deployed
at the user’s end, various changes occur due to changes in the external
environment (these include upgrading a new operating system or addition
of a new peripheral device). The changes also occur due to changing
requirements of the user and changes occurring in the field of technology.
This phase focuses on modifying software, correcting errors, and improving
the performance of the software.
Various advantages and disadvantages associated with the waterfall model
are listed in Table 11.2.
Table 11.2 Advantages and Disadvantages of Waterfall Model
Prototyping Model
The prototyping model is applied when detailed information related to input and
output requirements of the system is not available. In this model, it is assumed that
all the requirements may not be known at the start of the development of the
system. It is usually used when a system does not exist or in case of a large and
complex system where there is no manual process to determine the requirements.
This model allows the users to interact and experiment with a working model of
the system known as prototype. The prototype gives the user an actual feel of the
system.
At any stage, if the user is not satisfied with the prototype, it can be discarded and
an entirely new system can be developed. Generally, prototype can be prepared
by the approaches listed below.
- By creating main user interfaces without any substantial coding so that users
  can get a feel of how the actual system will appear
- By abbreviating a version of the system that will perform limited subsets of
  functions
- By using system components to illustrate the functions that will be included
  in the system to be developed.
Figure 2.10 illustrates the steps carried out in the prototyping model. These steps
are listed below.
1. Requirements gathering and analysis: A prototyping model begins with
requirements analysis and the requirements of the system are defined in
detail. The user is interviewed in order to know the requirements of the
system.
2. Quick design: When requirements are known, a preliminary design or
quick design for the system is created. It is not a detailed design and includes
only the important aspects of the system, which gives an idea of the system
to the user. A quick design helps in developing the prototype.
3. Build prototype: Information gathered from quick design is modified to
form the first prototype, which represents the working model of the required
system.
4. User evaluation: Next, the proposed system is presented to the user for
thorough evaluation of the prototype to recognize its strengths and
weaknesses such as what is to be added or removed. Comments and
suggestions are collected from the users and provided to the developer.
5. Refining prototype: Once the user evaluates the prototype and if he is not
satisfied, the current prototype is refined according to the requirements.
That is, a new prototype is developed with the additional information
provided by the user. The new prototype is evaluated just like the previous
prototype. This process continues until all the requirements specified by the
user are met. Once the user is satisfied with the developed prototype, a
final system is developed on the basis of the final prototype.
6. Engineer product: Once the requirements are completely met, the user
accepts the final prototype. The final system is evaluated thoroughly followed
by routine maintenance on a regular basis for preventing large-scale failures
and minimizing downtime.
Various advantages and disadvantages associated with the prototyping model
are listed in Table 11.3.
Spiral Model
In the 1980s, Boehm introduced a process model known as the spiral model. The
spiral model comprises activities organized in a spiral, and has many cycles. This
model combines the features of the prototyping model and the waterfall model and is
advantageous for large, complex, and expensive projects. It determines
requirements problems in developing the prototypes. In addition, it guides and
measures the need of risk management in each cycle of the spiral model. IEEE
defines the spiral model as ‘a model of the software development process in
which the constituent activities, typically requirements analysis, preliminary
and detailed design, coding, integration, and testing, are performed iteratively
until the software is complete.’
The objective of the spiral model is to make management focus on evaluating
and resolving risks in the software project. Different areas of risk in the software
project are project overruns, changed requirements, loss of key project personnel,
delay of necessary hardware, competition with other software developers and
technological breakthroughs, which make the project obsolete. Figure 11.10 shows
the spiral model.
Table 11.3 Advantages and Disadvantages of Prototyping Model
The spiral model is also similar to the prototyping model as one of the key
features of prototyping is to develop a prototype until the user requirements are
accomplished. The second step of the spiral model functions similarly. The prototype
is developed to clearly understand and achieve the user requirements. If the user
is not satisfied with the prototype, a new prototype known as operational
prototype is developed.
Various advantages and disadvantages associated with the spiral model are
listed in Table 11.4.
Table 11.4 Advantages and Disadvantages of Spiral Model
- Re-evaluation after each step allows changes in user perspectives, technology
  advances, or financial perspectives.
- Estimation of budget and schedule gets realistic as the work progresses.
Software testing is aimed at identifying bugs, errors, faults or failures (if any)
present in the software. Bug is defined as a logical mistake which is caused by a
software developer while writing the software code. Error is defined as the measure
of deviation of the output given by the software from the outputs expected by the
user. Fault is defined as the condition that leads to malfunctioning of the software.
Malfunctioning of a software is caused due to several reasons such as change in
the design, architecture or software code. The defect that causes an error in
Test: Provides an abstract of the entire process and outlines specific tests. The
testing scope, schedule and duration are also outlined.
Communication: A communication plan (the who, what, when and how about the
people involved) is developed.
Defect reporting: Specifies the way in which a defect should be documented so
that it may be reproduced, retested and fixed.
Environment: Describes the data, interfaces, work area and the technical
environment used in testing. All this is specified to reduce or eliminate
misunderstandings and sources of potential delay.
the test plan) required to execute the test plan. However, if the implementation
plan (plan that describes how the processes in the software are carried out)
of software changes, the test plan also changes. In this case, the schedule to
execute the test plan also gets affected.
Write the test plan: The components of a test plan such as its objectives,
test matrix and administrative component are documented. All these
documents are then collected together to form a complete test plan. These
documents are organized either in an informal or in a formal manner.
In the informal manner, all the documents are collected and kept together.
The testers read all the documents to extract information required for testing
the software. On the other hand, in the formal manner, the important points
are extracted from the documents and kept together. This makes it easy for
testers to analyse only important information, which they require during software
testing.
To avoid this, the test cases must be prepared in such a way that they check the
software with all possible inputs. This process is known as exhaustive testing,
and the test case which is able to perform exhaustive testing is known as ideal test
case. Generally, a test case is unable to perform exhaustive testing; therefore, a
test case that gives satisfactory results is selected. In order to select a test case,
certain questions should be addressed:
- How to select a test case?
- On what basis are certain elements of a program included in or excluded from a
  test case?
To provide an answer to these questions, test selection criterion is used,
which specifies the conditions to be met by a set of test cases designed for a given
program. For example, if the criterion is to exercise all the control statements of a
program at least once, then a set of test cases which meets the specified condition
should be selected.
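The criterion just mentioned, exercising all the control statements of a program at least once, can be sketched directly. The `classify` function and the chosen inputs are invented for illustration; the point is that the selected test set is the smallest one meeting the stated criterion.

```python
def classify(n):
    """Program under test with three control paths."""
    if n < 0:
        return "negative"   # exercised only by a negative input
    if n == 0:
        return "zero"       # exercised only by zero
    return "positive"       # exercised only by a positive input

# Test selection criterion: every control statement must be exercised at
# least once. Three cases, one per path, satisfy it; two would not.
selected_tests = [(-5, "negative"), (0, "zero"), (7, "positive")]
results = [classify(n) == expected for n, expected in selected_tests]
```

A test set chosen this way is not exhaustive, but it meets the specified condition, which is exactly the trade-off the text describes.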
Test Case Generation
The process of generating test cases helps in identifying the problems that exist in
the software requirements and design. For generating a test case, first the criterion
to evaluate a set of test cases is specified and then the set of test cases satisfying
that criterion is generated. There are two methods used to generate test cases:
Code-based test case generation: This approach, also known as
structure-based test case generation, is used to assess the entire software
code to generate test cases. It considers only the actual software code to
generate test cases and is not concerned with user requirements. Test cases
developed using this approach are generally used for performing unit testing.
These test cases can easily test statements, branches, special values and
symbols present in the unit being tested.
Specification-based test case generation: This approach uses
specifications that indicate the functions that are produced by the software
to generate test cases. In other words, it considers only the external view of
the software to generate test cases. It is generally used for integration testing
and system testing to ensure that the software is performing the required
task. Since this approach considers only the external view of the software,
it does not test the design decisions and may not cover all statements of a
program. Moreover, as test cases are derived from specifications, the errors
present in these specifications may remain uncovered.
Several tools, known as test case generators, are used for generating test
cases. In addition to test case generation, these tools specify the components of
the software that are to be tested. An example of a test case generator is Astra
QuickTest, which captures business processes in a visual map and generates
data-driven tests automatically.
Test Case Specifications
A test plan is not concerned with the details of testing individual units, nor does it
specify the test cases to be used for testing them. Thus, test case specification is
done in order to test each unit separately. Depending on the testing method
specified in a test plan,
the features of the unit to be tested are determined. The overall approach stated in
the test plan is refined into two parts: specific test methods and the evaluation
criteria. Based on these test methods and the criteria, the test cases to test the unit
are specified.
For each unit being tested, these test case specifications describe the test
cases, required inputs for test cases, test conditions and the expected outputs
from the test cases. Generally, it is required to specify the test cases before using
them for testing. This is because the effectiveness of testing depends to a great
extent on the nature of test cases.
Test case specifications are written in the form of a document. This is because
the quality of test cases is evaluated by performing a test case review, which
requires a formal document. The review of test case document ensures that the
test cases satisfy the chosen criteria and conform to the policy specified in the test
plan. Another benefit of specifying test cases in a formal document is that it helps
testers to select an effective set of test cases.
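A test case specification of the kind described, with required inputs, test conditions and expected outputs per case, can be represented as structured records that are easy to review and to execute. The `withdraw` function, case IDs and field names below are hypothetical, not from the source.

```python
# Each record carries the fields the text lists for a test case
# specification: inputs, the condition being tested, and the expected output.

test_case_specs = [
    {"id": "TC-01", "condition": "valid withdrawal",     "inputs": (100, 40),  "expected": 60},
    {"id": "TC-02", "condition": "withdraw everything",  "inputs": (100, 100), "expected": 0},
]

def withdraw(balance, amount):
    """Hypothetical unit under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def run_specs(specs):
    """Execute every specified case and report pass/fail per case ID."""
    return {s["id"]: withdraw(*s["inputs"]) == s["expected"] for s in specs}
```

Keeping the specification as data rather than buried in code is one way to support the formal review the text calls for: reviewers can check the cases against the test plan without reading the test harness.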
Levels of Software Testing
As mentioned earlier, a software is tested at different levels. Initially, individual
units are tested, and once they are tested, they are integrated and checked for
interfaces established between them. After this, the entire software is tested to
ensure that the output produced is according to user requirements. As shown in
Figure 11.15, there are four levels of software testing, namely, unit testing,
integration testing, system testing and acceptance testing.
Unit Testing
Unit testing is performed to test individual units of a software. Since the software
comprises various units/modules, detecting errors in these units is simple and
consumes less time, as they are small in size. However, it is possible that the output
produced by one unit becomes the input for another unit. Hence, if incorrect output
produced by one unit is provided as input to the second unit, then it also produces
wrong output. If this process is not corrected, the entire software may produce
unexpected outputs. To avoid this, all the units in the software are tested
independently using unit testing (see Figure 11.16).
Unit testing is not just performed once during the software development but is
repeated whenever the software is modified or used in a new environment. Some
other points that must be kept in mind about unit testing are:
- Each unit is tested separately regardless of other units of the software.
- The developers themselves perform this testing.
- The methods of white-box testing are used in this testing.
Unit testing is used to verify the code produced during software coding and is
responsible for assessing the correctness of a particular unit of source code.
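As a small illustration of unit testing, here is a single unit tested in isolation with Python's standard `unittest` module. The `interest` function and its test values are invented for the example.

```python
import unittest

def interest(principal, rate, years):
    """Unit under test: simple interest calculation."""
    return principal * rate * years

class InterestTest(unittest.TestCase):
    # The unit is checked independently of any other module, as the
    # text describes; each test exercises one behaviour of the unit.
    def test_basic(self):
        self.assertEqual(interest(1000, 0.05, 2), 100.0)

    def test_zero_years(self):
        self.assertEqual(interest(1000, 0.05, 0), 0.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(InterestTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the unit has no dependencies on other modules, a wrong result here points directly at this unit, which is why detecting errors at this level is quick.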
Integration Testing
Once unit testing is complete, integration testing begins. In integration testing, the
units validated during unit testing are combined to form a subsystem. The integration
testing is aimed at ensuring that all the modules work properly as per user
requirements when they are put together (i.e., integrated).
The objective of integration testing is to take all the tested individual modules,
integrate them, test them again, and develop the software according to design
specifications (see Figure 11.17). Some other points that must be kept in mind
about integration testing are:
- It ensures that all modules work together properly and transfer accurate
  data across their interfaces.
- It is performed with an intention to uncover errors that lie in the interfaces
  among the integrated components.
- It tests those components that are new or have been modified or affected
  due to a change.
The big bang approach and the incremental integration approach are used
to integrate modules of a program. In the big bang approach, initially all modules
are integrated and then the entire program is tested. However, when the entire
program is tested, it is possible that a set of errors is detected. It is difficult to
correct these errors since it is difficult to isolate the exact cause of the errors when
the program is very large. In addition, when one set of errors is corrected, new
sets of errors arise and this process continues indefinitely.
To overcome this problem, incremental integration is followed. The
incremental integration approach tests the program in small increments. It is easier
to detect errors in this approach because only a small segment of software code is
tested at a given instance of time. Moreover, interfaces can be tested completely
if this approach is used. Various kinds of approaches are used for performing
incremental integration testing, namely, top-down integration testing, bottom-
up integration testing, regression testing and smoke testing.
Top-Down Integration Testing
In this testing, the software is developed and tested by integrating the individual
modules, moving downwards in the control hierarchy. In top-down integration
testing, initially only one module, known as the main control module, is tested.
After this, all the modules called by it are combined with it and tested. This process
continues till all the modules in the software are integrated and tested.
It is also possible that a module being tested calls some of its subordinate
modules. To simulate the activity of these subordinate modules, a stub is written.
Stub replaces modules that are subordinate to the module being tested. Once the
control is passed to the stub, it manipulates the data as little as possible, verifies
the entry and passes the control back to the module under test (see Figure 11.18).
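The stub idea above can be sketched as follows: the higher-level module under test calls a subordinate module that does not exist yet, so a stub stands in, verifies the entry and passes control back. The payroll/tax example and all names are hypothetical.

```python
# Top-down integration: the main control module is tested before its
# subordinate module is written, using a stub in the subordinate's place.

def tax_module_stub(gross):
    """Stub for the yet-unwritten tax module: verifies the entry,
    manipulates the data as little as possible, returns a canned value."""
    assert gross >= 0       # verify the entry
    return 0.0              # fixed response; real tax logic comes later

def payroll(gross, tax_module):
    """Main control module under test; it calls its subordinate module."""
    return gross - tax_module(gross)

net = payroll(2000, tax_module_stub)
```

When the real tax module is integrated, it simply replaces `tax_module_stub` in the call, and the same tests are rerun against the combined pair.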
In breadth-first integration, initially all modules at the first level are integrated;
the process then moves downwards, integrating all the modules at the next lower
level. As shown in
Figure 11.19(b), initially modules A2, A3 and A4 are integrated with module A1
and then it moves down, integrating modules A5 and A5 with module A2, and
Self-Instructional
Material 207
Software Development module A8 with module A3. Finally, module A7 is integrated with module A5.
Bottom-Up Integration Testing
In this testing, individual modules are integrated starting from the bottom and then
moving upwards in the hierarchy. That is, bottom-up integration testing combines
and tests the modules present at the lower levels proceeding towards the modules
present at higher levels of the control hierarchy.
Some of the low-level modules present in the software are integrated to
form clusters or builds (collection of modules). A test driver that coordinates the
test case input and output is written and the clusters are tested. After the clusters
have been tested, the test driver is removed and the clusters are integrated, moving
upwards in the control hierarchy.
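A test driver for a low-level cluster can be sketched as follows; the functions `add_tax` and `total` are hypothetical low-level modules, not from any specific program:

```python
# Bottom-up integration sketch: a test driver feeds inputs to a
# low-level cluster and checks its outputs. (Hypothetical functions.)

def add_tax(price):            # low-level module
    return round(price * 1.1, 2)

def total(prices):             # low-level module built on add_tax
    return round(sum(add_tax(p) for p in prices), 2)

def driver():
    """Test driver: coordinates test-case input and output for the
    cluster; it is removed once the cluster is integrated upwards."""
    cases = [([100], 110.0), ([10, 20], 33.0)]
    for inputs, expected in cases:
        assert total(inputs) == expected
    return "cluster passed"

print(driver())                # -> cluster passed
```

Once the cluster passes, the driver is discarded and a higher-level module takes over the role of calling `total` directly.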
new versions are tested. System testing also tests some properties of the developed
software as part of the entire computer-based system. It takes various performance factors, such as load,
volume and response time of the system, into consideration and ensures that they
are in accordance with the specifications. It also determines and informs the
software developer about the current performance of the software under various
parameters (such as the condition to complete the work within a specified time
limit).
Often performance tests and stress tests are used together and require both software
and hardware instrumentation of the system. By instrumenting a system, the tester
can reveal conditions that may result in performance degradation or even failure of
a system. While performance tests are designed to assess the throughput, memory
usage, response time, execution time and device utilization of a system, stress
tests are designed to assess its robustness and error handling capabilities.
Performance testing is used to test several factors that play an important role in
improving the overall performance of the system. Some of these factors are:
Speed: Refers to how quickly a system is able to respond to its users.
Performance testing verifies whether the response is quick enough.
Scalability: Refers to the extent to which a system is able to handle the
load given to it. Performance testing verifies whether the system is able to
handle the load expected by users.
Stability: Refers to how long a system is able to prevent itself from failure.
Performance testing verifies whether the system remains stable under
expected and unexpected loads.
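The speed factor above can be illustrated with a small sketch that measures the response time of an operation and checks it against a specified limit (the operation and the one-second threshold are illustrative assumptions):

```python
import time

# Performance-testing sketch: measure the response time of an
# operation and verify it against a requirement. (Illustrative limit.)

def operation():
    return sum(range(10_000))        # stand-in for the system under test

def response_time(fn):
    """Return the wall-clock time taken by one call to fn."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

elapsed = response_time(operation)
assert elapsed < 1.0, "response-time requirement violated"
print(f"response time within limit: {elapsed < 1.0}")
```

In a real performance test the operation would be driven under varying load to check scalability and stability as well as speed.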
The output produced during performance testing is provided to the system
developer. Based on this output, the developer makes changes to the system in
order to remove the errors. This testing also checks system characteristics such as
its reliability. Other advantages associated with performance testing are:
It assesses whether a component or system complies with specified
performance requirements.
It compares different systems to determine which system performs the best.
Acceptance Testing
Acceptance testing is performed to ensure that the functional, behavioural and
performance requirements of software are met. IEEE defines acceptance testing
as a ‘formal testing with respect to user needs, requirements and business
processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized
entity to determine whether or not to accept the system.’
During acceptance testing, software is tested and evaluated by a group of
users either at the developer’s site or user’s site. This enables the users to test the
software themselves and analyze whether it is meeting their requirements. To perform
acceptance testing, a predetermined set of data is given to the software as input.
Top-down Programming
Top-down programming focuses on the use of modules. It is therefore also known
as modular programming. The program is broken up into small modules so that
it is easy to trace a particular segment of code in the software program. The
modules at the top level are those that perform general tasks and proceed to other
modules to perform a particular task. Each module is based on the functionality of
its functions and procedures. In this approach, programming begins from the top
level of hierarchy and progresses towards the lower levels. The implementation of
modules starts with the main module. After the implementation of the main module,
the subordinate modules are implemented, and the process continues in this way. In
top-down programming, there is a risk in implementing shared data structures, as
the modules are dependent on each other and have to share one or more functions
and procedures; in this way, the functions and procedures become globally visible. In
addition to modules, top-down programming uses sequences and nested levels of
commands.
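Top-down decomposition can be sketched as follows; the module names (`read_input`, `process`, `report`) are hypothetical placeholders for a program's general tasks:

```python
# Top-down programming sketch: the main module is written first and
# delegates general tasks to subordinate modules. (Hypothetical names.)

def read_input():              # subordinate module, implemented later
    return [3, 1, 2]

def process(data):             # subordinate module
    return sorted(data)

def report(data):              # subordinate module
    return ", ".join(map(str, data))

def main():
    """Top-level module: expresses the general task first; the
    subordinate modules fill in the details afterwards."""
    return report(process(read_input()))

print(main())                  # -> 1, 2, 3
```

The hierarchy mirrors the description above: implementation begins at `main` and progresses downwards to the subordinate modules.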
Bottom-up Programming
Bottom-up programming refers to the style of programming where an application
is constructed with the description of modules. The description begins at the bottom
of the hierarchy of modules and progresses through higher levels until it reaches
the top. Bottom-up programming is just the opposite of top-down programming.
Here, the program modules are more general and reusable than in top-down
programming.
It is easier to construct functions in a bottom-up manner, because the lower-level
functions are written and tested first and the higher-level functions are then composed
from them without passing complicated arguments between functions. It takes the
form of constructing abstract data types in languages such as C++ or Java, which
can be used to implement an entire class of applications and not only the one that is
to be written. It therefore becomes easier to add new features in a bottom-up
approach than in a top-down programming approach.
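A minimal sketch of the bottom-up style, using Python rather than C++ or Java: a general, reusable abstract data type is built first, and a higher-level feature is layered on top of it (both the `Stack` class and the `reverse` function are illustrative):

```python
# Bottom-up programming sketch: a reusable abstract data type is
# constructed first; applications are then built on top of it.

class Stack:
    """General-purpose ADT, usable by an entire class of applications."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def empty(self):
        return not self._items

def reverse(text):
    """Higher-level feature added easily on top of the ADT."""
    s = Stack()
    for ch in text:
        s.push(ch)
    out = ""
    while not s.empty():
        out += s.pop()           # items come back in reverse order
    return out

print(reverse("abc"))            # -> cba
```

Because `Stack` is not tied to this one application, another program can reuse it unchanged, which is the reusability advantage claimed above.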
Structured Programming
Structured programming is concerned with the structures used in a computer
program. Generally, the structures of a computer program comprise decisions,
sequences, and loops. The decision structures are used for conditional execution
of statements (for example, ‘if’ statement). The sequence structures are used
for the sequentially executed statements. The loop structures are used for
performing some repetitive tasks in the program.
Structured programming forces a logical structure in the program to be written in
an efficient and understandable manner. The purpose of structured programming
is to make the software code easy to modify when required. Some languages such
as Ada, Pascal, and dBase are designed with features that implement the logical
program structure in the software code. Primarily, structured programming
focuses on eliminating the following constructs from the program:
‘GOTO’ statements.
‘Break’ or ‘Continue’ outside the loops.
Multiple exit points to a function, procedure, or subroutine. For example,
multiple ‘Return’ statements should not be used.
Multiple entry points to a function, procedure, or a subroutine.
Structured programming generally makes use of top-down design because
program structure is divided into separate subsections. A defined function or set
of similar functions is kept separately. Due to this separation of functions, they are
easily loaded in the memory. In addition, these functions can be reused in one or
more programs. Each module is tested individually. After testing, they are integrated
with other modules to achieve an overall program structure. Note
that a key characteristic of a structured statement is the presence of single entry
and single exit point. This characteristic implies that during execution, a structured
statement starts from one defined point and terminates at another defined point.
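The three structures and the single-entry/single-exit property can be sketched in Python (the functions are illustrative):

```python
# Structured-programming sketch: decision, sequence and loop
# structures, each with a single entry and a single exit point.

def classify(n):
    """Decision structure with a single exit: the result is chosen
    by the 'if' ladder, then returned at exactly one point."""
    if n < 0:                    # decision structure
        result = "negative"
    elif n == 0:
        result = "zero"
    else:
        result = "positive"
    return result                # single exit point

def total(numbers):
    """Loop structure instead of GOTO-style jumps."""
    s = 0                        # sequence structure
    for n in numbers:            # loop structure
        s += n
    return s                     # single exit point

print(classify(-5), total([1, 2, 3]))   # -> negative 6
```

Each function starts from one defined point and terminates at another defined point, which is what makes the control flow easy to follow and modify.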
Information Hiding
Information hiding focuses on hiding the non-essential details of functions and
code in a program so that they are inaccessible to other components of the
software. A software developer applies information hiding in software design and
coding to hide unnecessary details from the rest of the program. The objective of
information hiding is to minimize complexities among different modules of the
software. Note that complexities arise when one program or module in software
is dependent on several other programs and modules.
Information hiding is implemented with the help of interfaces. An interface is
a medium of interaction for software components that are using the properties of
the software modules containing data. The implementation of interfaces depends
on the syntax and process. Examples of interface include constants, data types,
types of procedures, and so on. Interfaces protect other parts of programs when
a software design is changed.
Generally, the interfaces act as a foundation for modular programming (top-down
programming) and object-oriented programming. In object-oriented
programming, interface of an object comprises a set of methods, which are used
to interact with the objects of the software programs. Using information hiding, a
single program is divided into several modules. These modules are independent of
each other and can be used interchangeably in other software programs.
To understand the concept of information hiding, let us consider an example
of a program written for ‘car’. The program can be organized in several ways.
One is to arrange modules without using information hiding. In this case, the modules
can be created as ‘front part’, ‘middle part’, and ‘rear part’. On the other hand,
creating modules using information hiding includes specifying names of modules
such as ‘engine’ and ‘steering’.
On comparison, it is found that modules created without using information
hiding affect other modules. This is because when a module is modified, it affects
the data, which does not require modification. However, if modules are created
using information hiding, then modules are concerned only with specific segments
of the program and not the whole program or other parts of the program. In our
example, this statement means that the module ‘engine’ does not have any effect
on the module ‘steering’.
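The ‘car’ example can be sketched directly: the module ‘engine’ hides its non-essential details behind a small interface, so the module ‘steering’ is unaffected by changes inside it (the attribute names are illustrative):

```python
# Information-hiding sketch for the 'car' example: each module hides
# its internals and exposes only a small interface.

class Engine:
    def __init__(self):
        self._rpm = 0            # non-essential detail, kept hidden

    def start(self):             # interface: the only way in
        self._rpm = 800
        return "engine running"

class Steering:
    def turn(self, direction):   # independent of Engine internals
        return f"turning {direction}"

engine, steering = Engine(), Steering()
print(engine.start(), "|", steering.turn("left"))
# -> engine running | turning left
```

Changing how `Engine` stores its state (for instance, replacing `_rpm` with a different representation) requires no change to `Steering`, which is precisely the benefit claimed for information hiding.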
1. The requirements that are commonly considered are classified into three
categories, namely, functional requirements, non-functional requirements and
domain requirements.
2. IEEE defines software design as ‘both a process of defining the architecture,
components, interfaces, and other characteristics of a system or component
and the result of that process.’
3. A test plan describes how testing is accomplished. It is a document that
specifies the purpose, scope and method of software testing.
4. There are four levels of software testing, namely, unit testing, integration
testing, system testing and acceptance testing.
11.9 SUMMARY
Operating System
BLOCK - IV
FUNDAMENTALS OF OS AND
WORKINGS OF INTERNET
12.0 INTRODUCTION
12.1 OBJECTIVES
Why do we need an Operating System (OS)? Or, what are the objectives of
having an OS for a computer? An operating system is a set of programs that
manages computer hardware resources to provide common services for software
and application programs. Without an operating system, a user cannot run an
application program on a computer unless the application program is self-booting.
For hardware functions, such as input and output, and
memory allocation, the operating system acts as an intermediary between application
programs and the computer hardware although the application code is usually
executed directly by the hardware and will call the operating system or be
interrupted by it. An operating system manages the computer's memory, processes
and all of its software and hardware. There are many objectives to be met for
achieving the ultimate goal of easy to use and human friendly operating system that
transforms the computer to be useful, user friendly, acceptable and affordable to
everyone in this world. We may achieve this ultimate goal through the realization
of the following three major objectives:
Convenience or Ease of Use
Operating systems hide the idiosyncrasies of hardware by providing abstractions
for ease of use. Abstraction hides the low level details of the hardware and provides
high level user friendly functions to use a hardware piece. For example, consider
using a hard disk which records information in terms of magnetic polarities. A hard
disk consists of many cylinders and tracks and sectors within a track and the read/
write heads for writing and reading bits to a sector. A user of a computer system
cannot convert the data to be recorded to a format needed for the disk and issue
low level commands to address the appropriate track read/write head and write
the data on to the right sector. There are software programs called device drivers
for every device type to handle this task. The programmer just issues read/write
commands through the OS (Operating System) and the OS just passes it to the
driver which translates this command to the low level commands that a hard disk
can understand. The OS may issue the same command for reading/writing to a
tape drive or to a printer. However, the device drivers translate it into the proper
low level commands understood by the addressed device. The same is true for
any other electromechanical and electronic devices connected to a computer.
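This dispatch from one uniform OS command to device-specific drivers can be sketched as follows; the driver classes and their low-level actions are hypothetical simplifications:

```python
# OS-abstraction sketch: the same high-level write command is passed
# to different device drivers, each of which translates it into its
# own low-level commands. (Hypothetical driver classes.)

class DiskDriver:
    def write(self, data):
        return f"disk: seek track/sector, write {len(data)} bytes"

class PrinterDriver:
    def write(self, data):
        return f"printer: spool {len(data)} bytes"

def os_write(device, data):
    """The OS issues one uniform command; the driver translates it."""
    return device.write(data)

for dev in (DiskDriver(), PrinterDriver()):
    print(os_write(dev, b"hello"))
```

The program that calls `os_write` never sees cylinders, tracks or print spools; that is the abstraction the operating system provides.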
Efficient Allocation and Utilization of Resources
A computer system has various resources like CPU (Central Processing Unit),
memory and I/O devices. Every use of the resources is controlled by the OS.
Executing programs or processes may request the use of resources and the
OS sanctions the request and allocates the requested resources, if available. The
allocated resources will be taken back from the processes normally after the use
of the same is completed. That is, the OS is considered the manager of the
resources of a computer system. The allocation of resources must be done with a
view to overall efficiency and utilization. That is, the allocation must be done to
improve the system throughput (number of jobs executed per unit time) and to
ensure the use of resources in a fair manner.
Ability to Evolve
The OS provides many services to the user community. As time passes, users
may need new and improved services for accomplishing some task with the
computer. In some situations, as we use the system for doing new tasks, some of
the existing services may introduce undesirable side effects, causing inefficient
utilization of resources, or may even compromise the security of the system
itself. So, it may also be desirable to remove the existing implementations of such
services and introduce new implementations. Also, there can be bugs in the code
that may surface over time as we use the services for various purposes. The design
of the OS must include provisions for easy introduction of new services and
removing or improving/ replacing the existing services. Also, the design must provide
easy interfacing facility to connect and communicate with new types of hardware
devices and upgraded versions of the existing hardware devices.
12.3 FUNCTIONS OF OS
The hardware of a computer system is normally very costly. In a multiuser
system, as many users are sharing this costly hardware, the cost is shared among
these users and accordingly, the resource utilization is also high. However, as the
operating system has to switch between many users in a short time, there are
some unproductive computations, called overhead computations, that are done
for job switching and associated work.
5. Single User System
A computer system in which only one user can work at a time is called a single
user system. The familiar Intel processor based Windows OS Personal Computer
(PC) is an example of single user system. Such a typical system has a single
keyboard, mouse and monitor as I/O devices for the user to interact with the
system. Users apply commands through the keyboard and mouse and the computer
displays its responses (output) on the monitor. A user can do various tasks like
preparation of a document or editing a program or writing a letter and printing it
(using a printer). A user can simultaneously or concurrently execute many tasks or
jobs in currently available personal computers. An important design issue of such
interactive systems is the response time, which is the time taken by the computer
to start producing output after a command has been entered. The response time
must be within a predictable limit for user satisfaction and acceptability. When one
user is working on the computer, the other users have to wait till the current user
finishes his work and leaves. Examples of single user operating systems are MS
DOS, Windows 95, Windows NT, Windows 2000 and Windows Vista, each of
which runs on Intel processor based computers. Other examples are Macintosh
OS X and single user Linux systems, which are actually derivatives of the UNIX
operating system.
6. Multitasking System
Multitasking is concerned with a single user executing more than one program
simultaneously. It may also mean a user executing his application as many concurrent
processes. An operating system with multitasking ability runs many applications—
for example, Microsoft Office package, Gaming applications, Internet explorer,
etc., simultaneously/concurrently. The operating system executes each for a small
time slice in a round robin fashion so that the user cannot distinguish the switching
of CPU among different applications. It appears to the user that they are running
simultaneously.
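The round-robin time slicing described above can be sketched as a small simulation; the task names and time units are illustrative:

```python
from collections import deque

# Multitasking sketch: the OS gives each task a small time slice in
# round-robin fashion until every task finishes. (Illustrative units.)

def round_robin(tasks, quantum=2):
    """tasks: dict of name -> remaining time units; returns finish order."""
    queue, order = deque(tasks.items()), []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                  # run for one time slice
        if remaining > 0:
            queue.append((name, remaining))   # pre-empted, requeued
        else:
            order.append(name)                # task finished
    return order

print(round_robin({"editor": 3, "browser": 5, "player": 2}))
# -> ['player', 'editor', 'browser']
```

Because each slice is short, the user perceives the editor, browser and player as running simultaneously even though the CPU executes only one of them at any instant.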
The advantages of this system are as follows:
(i) The ability of a multitasking system to permit the user to run more than
one task simultaneously leads to increased productivity.
(ii) It improves the system resource utilization, throughput and overall efficiency
of the system.
The disadvantages are as follows:
Programs must be loaded into the memory for execution. During this period, the
program is called a process. In a multiprogramming environment, several processes
are executed in the system. Memory must be allocated for all the processes.
If the memory size is 1000 KB and we have 5 processes each of size 200 KB,
then the memory for all the processes can be allocated without any difficulty. If the
total size of the processes is greater than the memory size, then we cannot execute
all those processes.
Using the virtual memory technique, we can execute a process which is
larger than the physical memory. Using virtual memory, the physical memory can
be assumed to be an extremely large memory. When the virtual memory technique
is used, the programmer need not worry about the physical memory; programs
that are larger than the size of the physical memory can be executed.
Virtual memory decreases the performance of the system and is difficult to
implement. The main advantage of virtual memory is that it uses the existing physical
memory effectively.
If we closely inspect program execution, we find that the entire program is not
needed inside the memory. For example, the function SQRT must be present in
the memory only when it is to be executed. Sometimes, programs have error code that is executed
when the error occurs. If the error does not occur during program execution, then
the error handling code does not need to be loaded in the memory. Arrays may be
declared to be of size 500 while only some of the elements are actually used. Some
of the functions in the program are never called; so they are not required to be in
the memory.
In some cases, the entire program is needed, but the CPU executes only one
instruction at a time, and only that instruction needs to be present in the memory.
The virtual memory separates the logical memory from the physical memory.
This separation results in large virtual memory for the users. Virtual memory allows
sharing of files and memory by different processes. Here the pages are shared,
which improves the performance at the time of the process creation.
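The demand-loading idea behind virtual memory can be sketched as follows; the page-table representation and the "contents" strings are illustrative assumptions, not an actual OS mechanism:

```python
# Virtual-memory sketch: pages are loaded into physical frames only
# on first access (demand paging); pages never touched never occupy
# physical memory, so the virtual space can exceed the physical one.

class VirtualMemory:
    def __init__(self, virtual_pages):
        self.virtual_pages = virtual_pages
        self.frames = {}                  # physical frames actually in use

    def access(self, page):
        assert 0 <= page < self.virtual_pages
        if page not in self.frames:       # page fault: load on demand
            self.frames[page] = f"page {page} data"
        return self.frames[page]

vm = VirtualMemory(1000)                  # large virtual address space
vm.access(3); vm.access(3); vm.access(7)
print(len(vm.frames))                     # only 2 frames in use -> 2
```

Of the 1000 virtual pages, only the two that were actually referenced consume physical memory, which is how a process larger than physical memory can still execute.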
An operating system is a complex and normally huge software used to control and
coordinate the hardware resources like the CPU, memory and I/O devices to enable
easy interaction of the computer with humans and application programs. The objects
or entities that an operating system manages or deals with include processes,
memory space, files, I/O devices and networks. Let us first briefly describe each
of these entities.
Process: A process is simply the program in execution. For every program
to execute, the operating system creates a process. A process needs resources
like a CPU, memory and I/O devices to execute a program. These resources are
under the control of the operating system. In a computer, there will be many
programs in the state of execution; hence, a large number of processes demanding
various resources need to be maintained and managed by the operating
system. When the execution is finished, the resources held by a process are
returned to the operating system.
Memory Space: The execution of a program needs memory. The available
memory is divided among various programs that are needed to execute
simultaneously (concurrently) in a time multiplexed way. Because the secondary
storage devices are much slower than main memory, the programs to be executed
concurrently are loaded into memory and kept ready, awaiting the CPU, for fast
overall execution speed. The memory is space multiplexed to load and execute
more than one program in a time interleaved way. In the normal type of memory,
only one memory location can be accessed at a time; so, even if more than one
CPU is available, only one program memory area can be accessed at a time.
However, an instruction that does not need main memory access (access can be
from local memory or from the CPU cache) can be executed simultaneously in a
multiprocessor scenario.
Files: Files are used to store sequences of bits, bytes, words or records.
Files are an abstract concept used to represent storage of logically similar entities
or objects. A file may be a data file or a program file. A file with no format for
interpreting its content is called a plain unformatted file. Formatted file content can
be interpreted and is more portable as one knows in what way the data inside is
structured and what it represents. Example formats are Joint Photographic Experts
Group (JPEG), Moving Picture Experts Group (MPEG), Graphical Interchange
Format (GIF), Executable (Exe), MPEG Audio Layer III (MP3), etc.
I/O Devices: Primary memory like RAM is volatile and the data or program
stored there will be lost when we switch off the power to the computer system or
when it is shut down. Secondary storage devices are needed to permanently
preserve program and data. Also, as the amount of primary storage that can be
provided will be limited due to reasons of cost, a secondary storage is needed to
store the huge amount of data needed for various applications.
Network: A network is the interconnection system between one computer
and other computers located on the same desk, in the same room or building, in
an adjacent building, or at any geographical location in the world. We can have
wired or wireless network connections to other computers located anywhere in
the world.
Program Execution: The operating system must be able to load a program
into memory and run it, and the program must be able to end its execution,
either normally or abnormally.
I/O Operations: Almost all the programs require I/O involving a file or an
I/O device. For efficiency and protection, the operating system must provide
a means to perform I/O instead of leaving it for users to handle I/O devices
directly.
File-System Manipulation: Often, programs need to manipulate files and
directories, such as creating a new file, writing contents to a file, deleting or
searching a file by providing its name, etc. Some programs may also need
to manage permissions for files or directories, allowing or denying other
programs' requests to access these files or directories.
Communication: A process executing in one computer may need to
exchange information with the processes executing on the same computer
or on a different computer connected via a computer network. The
information is moved between processes by the operating system.
Error Detection: There is always a possibility of occurrence of error in the
computer system. Errors may occur in the CPU, memory, I/O devices, or in
a user program. Examples of errors include an attempt to access an illegal
memory location, power failure, link failure on a network, too long use of
CPU by a user program, etc. The operating system must be constantly
aware of possible errors, and should take appropriate action in the event of
occurrence of error to ensure correct and consistent computing.
In addition to these services, the operating system also provides another set of
services that ensures the efficient operation of the system in a multiprogramming
environment. These services exist not for helping the users, but for ensuring the
efficient and secure execution of programs.
Resource Allocation: In case of multiprogramming, many programs execute
concurrently, each of which requires many different types of resources, such
as CPU cycles, memory, I/O devices, etc. Therefore, in such an environment,
the operating system must allocate resources to programs in a manner such
that resources are utilized efficiently, and no program waits forever
for other programs to complete their execution.
Protection and Security: Protection involves ensuring controlled access
to the system resources. In a multiuser or a networked computer system,
the owner of information may want to protect information. When several
processes execute concurrently, a process should not be allowed to interfere
with other processes or with the operating system itself. Security involves
protecting the system from unauthorized users. To provide security, each
user should authenticate himself or herself to the system before accessing
system resources. A common means of authenticating users is the
username/password mechanism.
Accounting: We may want to keep track of usage of system resources by
each individual user. This information may be used for accounting so that
users can be billed or for accumulating usage statistics, which is valuable for
researchers.
12.8 OPERATING SYSTEM SECURITY
Though the terms security and protection are usually used interchangeably, they
have different meanings in computer terminology. Security deals with the threats
to information caused by the outsiders (non-users), whereas protection deals
with the threats caused by other users (those who are not authorized to do what
they are doing) of the system.
Security Threats
Security threats around us continue to evolve in new forms. Some of them
are caused by humans, some by nature, such as floods, earthquakes and fire,
and some by the use of the Internet, such as viruses, Trojan horses, spyware, and
so on.
The person who tries to breach the security of a system and cause harm
is known as an intruder or hacker in the computer world. Once the intruder gains
access to the systems in an organization, he or she may steal the confidential data,
modify it in place, or delete it. The stolen data or information can then be used for
illegal activities like blackmailing the organization, selling the information to
competitors, etc. This attack proves to be even more destructive if the data deleted
by the intruder cannot be recovered by the organization.
Security can also be affected by natural calamities, such as earthquakes,
floods, wars, storms, etc. Such disasters are beyond the control of humans and
can result in huge loss of data. The only way to deal with these threats is to maintain
timely and proper backups of data at geographically separated locations.
The use of the Internet is another possible cause that can affect security.
With the evolving use of the Internet across the world, a system connected to it
is always prone to be affected by the following threats:
Virus: A virus is a program which is designed to replicate, attach to other
programs and perform unsolicited and malicious actions. It executes when
an infected program is executed. On MS DOS systems, these files usually
have the extensions .exe, .com or .bat. Virus enters computer systems
from an external software source and easily hides in software. It may become
destructive as soon as it enters a system or can be programmed to lie dormant
until activated by a trigger.
Trojan Horse: A Trojan horse is a program which is designed to damage
files by allowing a hacker to access your system. It enters into a computer
system through an e-mail or free programs that have been downloaded
from the Internet. Once it safely gets into the computer, it usually opens the
way for other malicious software (like viruses) to enter into the computer
system. In addition, it may also allow unauthorized users to access the
information stored in the computer.
Spyware: Spyware is a small program that installs itself on a computer
to gather data secretly about the computer user without his/her
consent and report the collected data to interested users or parties. The
information gathered by the spyware may include e-mail addresses and
passwords, net surfing activities, credit card information, etc. The spyware
often gets automatically installed on your computer when you download a
program from the Internet or click any option from the pop-up window in
the browser.
Hackers and Phishing: Hackers are the programmers who break into
others' computer systems in order to steal, damage or change the information
as they want. On the other hand, phishing is a form of threat that attempts to
steal the sensitive data (financial or personal) with the help of fraudulent
e-mails and messages.
These security threats may result in loss of data confidentiality and data
integrity, and/or may result in system unavailability. Hence, system designers should
adopt some strict mechanisms to prevent these attacks, such as applying firewalls
to prevent the unauthorized use of data, enabling the use of anti-virus programs,
and keeping a strict eye on new programs loaded into the computer system.
Design Principles for Security
Designing a secure operating system is a crucial task. While designing the operating
system, the major concern of designers is on the internal security mechanisms that
lay the foundation for implementing security policies. Researchers have identified
certain principles that can be followed to design a secure system. Some design
principles presented by Saltzer and Schroeder (1975) are as follows:
Least Privilege: This principle states that a process should be allowed the
minimal privileges that are required to accomplish its task.
Fail-Safe Default: This principle states that access rights should be provided
to a process on its explicit requests only and the default should be no access.
Complete Mediation: This principle states that each access request for
every object should be checked by an efficient checking mechanism in order
to verify the legality of access.
User Acceptability: This principle states that the mechanism used for
protection should be acceptable to the users and should be easy to use.
Otherwise, the users may feel a burden in following the protection mechanism.
Economy of Mechanism: This principle states that the protection
mechanism should be kept as simple as possible, since simplicity aids
verification and correct implementation.
Least Common Mechanism: This principle states that the amount of
mechanism common to and depended upon by multiple users should be
kept to a minimum.
Open Design: This principle states that the design of the security mechanism
should be open to all and should not depend on the ignorance of intruders. This
is reflected in cryptographic systems, where the algorithms are made
public while the keys are kept secret.
Separation of Privileges: This principle states that access to an object
should not depend on fulfilling a single condition only; rather, more than one
condition should be fulfilled before access to the object is granted.
User Authentication and Authorization
One way in which an operating system provides security is by controlling
access to the system. This approach often raises a few questions, such as:
Who is allowed to log in to the system?
How can a user prove that he is a genuine user of the system?
Some process is required that lets users present their identity to the
system so that its correctness can be confirmed. This process of verifying the identity of a
user is termed authentication. User authentication can be based on user
knowledge (such as a username and password), user possession (such as a card
or key) and/or a user attribute (such as a fingerprint, retina pattern or iris design).
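Knowledge-based authentication can be sketched as follows. This is an illustrative example, not the book's material; it assumes (as is common practice, though the text does not say so) that the system stores salted password hashes rather than the passwords themselves.

```python
# Minimal sketch of password (knowledge-based) authentication.
import hashlib
import hmac
import os

def make_record(password):
    """Enrol a user: store a random salt and a salted hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, record):
    """Verify an identity claim by recomputing the hash over the stored salt."""
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

record = make_record("s3cret")
print(authenticate("s3cret", record))  # True: the identity claim is verified
print(authenticate("guess", record))   # False: authentication fails
```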
Authentication is simply proving that a user’s identity claim is valid and
authentic. Authentication requires some form of ‘proof of identity.’ In network
technologies, physical proof (such as a driver’s license or photo ID) cannot be
employed, so you have to get something else from a user. That typically means
having the user respond to a challenge to provide genuine credentials at the time
access is requested. For our purposes, credentials can be something the user
knows, something the user has, or something they are. Once they provide
authentication, there also has to be authorization, or permission to enter. Finally,
you want to have some record of users’ entry into your network—username, time
of entry and resources. That is the accounting side of the process.
Authorization is independent of authentication. A user can be permitted entry
into the network but not be authorized to access a resource. You do not want an
employee having access to HR information or a corporate partner getting access
to confidential or proprietary information. Authorization requires a set of rules that
dictate the resources to which a user will have access. These permissions are
established in your security policy.
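The authorization-plus-accounting side described above can be sketched like this. The policy table, role names and resources are all hypothetical examples, not taken from the text.

```python
# Sketch of authorization (rules from a security policy) plus accounting
# (a record of who asked for what, and when). Names are illustrative.
from datetime import datetime, timezone

POLICY = {"hr_staff": {"hr_records"}, "partner": {"public_docs"}}
audit_log = []  # accounting: user, timestamp, resource, decision

def authorize(user, role, resource):
    """An authenticated user may still be denied a resource by policy."""
    allowed = resource in POLICY.get(role, set())
    audit_log.append((user, datetime.now(timezone.utc), resource, allowed))
    return allowed

print(authorize("carol", "partner", "hr_records"))  # False: in, but not authorized
print(authorize("dan", "hr_staff", "hr_records"))   # True
print(len(audit_log))                               # both decisions were recorded
```

Note how authorization is checked independently of authentication, exactly as the paragraph above describes: Carol is a legitimate user yet cannot reach HR records.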
The OS is also responsible for ensuring that resources are consumed only by
the programs and users that hold the relevant authorizations.
Authentication and authorization are thus the two important factors that decide
whether, and to what extent, a user may access the system.
Biometric Techniques
Biometric authentication technologies (see Figure 12.1) use the unique
characteristics (or attributes) of an individual to authenticate the person’s identity.
These include physiological attributes (such as fingerprints, hand geometry or
retina patterns) or behavioural attributes (such as voice patterns and hand-written
signatures). Biometric authentication technologies based on these attributes have
been developed for computer log in applications. Biometric authentication is
technically complex and expensive, and user acceptance can be difficult.
Biometric systems provide an increased level of security for computer
systems, but the technology is still new as compared to memory tokens or
smart tokens. Biometric authentication devices are imperfect; errors
may result from technical difficulties in measuring and profiling physical attributes
as well as from the somewhat variable nature of physical attributes. These
attributes may change, depending on various conditions. For example, a
person’s speech pattern may change under stressful conditions or when suffering
from a sore throat or cold. Due to their relatively high cost, biometric systems
are typically used with other authentication means in environments where high
security is required.
12.12 SELF ASSESSMENT QUESTIONS AND
EXERCISES
Internet and Its Working
13.0 INTRODUCTION
The Internet, World Wide Web (WWW) and information super highway have
penetrated into the lives of millions of people all over the world. The Internet is a
network made up of thousands of networks worldwide. Obviously, these networks
are composed of computers and other intelligent and active devices. In fact, the
Internet is an example of self-regulating mechanism and there is no one in charge
of the Internet. There are organizations which are entrusted to develop technical
aspects of this network, but no governing body is in control. Private companies
own the Internet backbone, through which the Internet traffic or data flows in the
form of text, video, graphics, sound, image, etc.
13.1 OBJECTIVES
13.2 DEFINITION
[Fig. 13.1 Local Area Networks (Ethernet at 10 Mbps, token ring at 4 Mbps) Connected to the Internet via Gateways or Routers]
13.3 HISTORY OF THE INTERNET
The Internet, WWW and Information Super Highway are terms which have a deep
impact on the lives of millions of people all over the world. The global nature of
the Internet would not have been possible without the development of the Transmission
Control Protocol/Internet Protocol (TCP/IP), the protocol suite developed
specifically for the Internet. The information technology revolution of today could
not have been achieved without this vast network of networks.
During the late 1960s and 1970s, organizations were inundated with many
different LAN and WAN technologies, such as packet switching, collision-detection
local area networks, hierarchical enterprise networks and many other
excellent technologies. The major drawback of all these technologies was that
they could not communicate with each other without the expensive deployment of
communication devices, which not only cost a great deal but also put users at the
mercy of the monopoly of the vendor they dealt with. Consequently,
multiple networking models became available as a result of the research and development
efforts made by many interest groups. This paved the way for the development of another
aspect of networking known as protocol layering, which allows applications to
communicate with each other. A complete range of architectural models was proposed
and implemented by various research teams and computer manufacturers. The result
of all this great know-how is that today, any group of users can find a physical
network and an architectural model suitable for their specific needs. This includes
cheap asynchronous lines with no other error recovery than a bit-per-bit parity
function, through full-function wide area networks (public or private) with reliable
protocols such as public packet switching networks or private SNA networks, to
high-speed but limited-distance local area networks.
It is now evident that organizations and users employ different network
technologies to connect computers over a network. The desire to share more and
more information among homogeneous or heterogeneous interest groups motivated
researchers to devise technology by which one group of users could extend its
information system to another group of users who happen to have a different
network technology and different network protocols. This necessity was recognized
in the early 1970s by a group of researchers in the United States of
America who hit upon a new principle popularly known as internetworking. Other
organizations also became involved in this area of interconnecting networks, such
as ITU-T (formerly CCITT) and ISO. All were trying to define a set of protocols,
layered in a well-defined suite, so that applications would be able to communicate
to other applications, regardless of the underlying network technology and the
operating systems where those applications run.
ARPanet
ARPAnet was built by DARPA. It initiated packet switching technology in
the world of networking and is therefore sometimes referred to as the 'grand-daddy
of packet networks'. The ARPAnet was established in the late 1960s for
the Department of Defense to accommodate research on packet
switching technology, besides allowing resource sharing for the Department of
Defense's contractors. The network included research centres, some military bases
and government locations. It soon became popular with researchers for collaboration
through electronic mail and other services. ARPAnet formed the beginning of the
Internet.
ARPAnet provided interconnection of various packet-switching nodes
(PSN) located across the continental USA and western Europe using 56 Kbps
leased lines. ARPAnet provided connection to minicomputers running a protocol
known as 1822 (after the number of a report describing it) and dedicated to the
packet-switching task. Each PSN had at least two connections to other PSNs (to
allow alternate routing in case of circuit failure) and up to twenty-two ports for
user computer connections. Later on, DARPA replaced the 1822 packet switching
technology with the CCITT X.25 standard. Subsequently, the excessive increase
in data traffic made the capacity of the existing 56 Kbps lines insufficient.
ARPAnet has now been replaced with new technologies.
Internet 2
The success of the Internet and the consequent frequent congestion of
the existing backbones led the research community to look for alternatives. The
university community, therefore, together with government and industry partners,
and encouraged by the funding agencies, formed the Internet2 project. Internet2
has the following principal objectives:
To create a high bandwidth, leading-edge network capability for the research
community in the US
To enable a new generation of applications and communication technologies
to fully exploit the capabilities of broadband networks
To rapidly transfer newly developed technologies to all levels of education
and to the broader Internet community, both in the US and abroad.
The Internet protocols enable the transfer of data over networks and/or the Internet
in an efficient manner. When various computers are connected through a computer
network, it becomes necessary to use a protocol to use network
bandwidth efficiently and avoid collisions.
A network protocol defines a language that contains the rules and conventions
which are necessary for reliable communication between different devices over a
network. For example, it includes rules that specify how to package data into
messages, how to acknowledge messages and how to compress data.
There are a number of Internet protocols used. The most commonly used
protocols are as follows:
Transmission Control Protocol/Internet Protocol (TCP/IP)
HyperText Transfer Protocol (HTTP)
File Transfer Protocol (FTP)
Telnet
(a) Transmission Control Protocol/Internet Protocol or TCP/IP
TCP/IP is a protocol suite that is used to transfer data over the Internet. The two
main protocols in this protocol suite are:
TCP: It forms the higher layer of TCP/IP and divides a file or message into
smaller packets, which are then transmitted over the Internet. Following this, a
TCP layer receives these packets on the other side and reassembles them into the
original message.
When two computers seek a reliable communication between each other,
they establish a connection. This is analogous to making a telephone call. If you
want to speak to your uncle in New York, a connection is established when you
dial his phone number and he answers. The TCP guarantees that data sent from
one end of the connection actually reaches the other end in the same order it was
sent. Otherwise, an error is reported.
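The splitting and reassembly described above can be sketched in a few lines. This is an illustration of the idea only, not the actual TCP algorithm: real TCP uses byte-oriented sequence numbers, acknowledgements and checksums.

```python
# Sketch: number each segment so the receiver can restore the original order
# even if the network delivers the packets shuffled.
import random

def segment(message, size=4):
    """Divide a message into (sequence_number, data) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort by sequence number and concatenate the data back together."""
    return "".join(data for _, data in sorted(packets))

packets = segment("Hello, Internet!")
random.shuffle(packets)        # the network may reorder packets in transit
print(reassemble(packets))     # Hello, Internet!
```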
IP: It forms the lower layer of the protocol suite. The address part of all the
packets is handled by it in such a manner that they reach the desired destination.
Usually, this address is checked by each gateway computer on the network to
identify where the message is to be forwarded. This implies that all the packets of
a message are delivered to the destination regardless of the route used for delivering
the packets.
Therefore, IP is responsible for routing datagrams (packets) from one host
to another one over a network. In a network, information is passed in the form of
‘packets’. The IP does not verify that the packet really reaches its destination, nor
is it the task of the IP to make sure that it reaches error free and in the correct
order.
The working of TCP/IP can be compared to shifting your residence to a
new location. It involves packing your belongings in smaller boxes for easy
transportation, with the new address and a number written on each of them. You
then load them on multiple vehicles. These vehicles may take different routes to
reach the same destination. The delivery time of each vehicle depends on the amount
of traffic and the length of the route. Once the boxes are delivered to the destination,
you check them to make sure that all of them have been delivered in good shape.
After that, you unpack the boxes and reassemble your house.
(b) HyperText Transfer Protocol or HTTP
HTTP is a protocol that transfers files (image, text, video, sound and other multimedia
files) using the Internet (see Figure 13.2). It runs on top of the TCP/IP protocol
suite and is an application protocol that forms the foundation protocol of the Internet.
It helps define how messages are transmitted and formatted, and specifies
the actions that Web browsers and Web servers must take in response to the
issued commands. HTTP is based on a client-server architecture where the Web
browser acts as an HTTP client making requests to the Web server machines. In
addition to Web pages, a server machine contains an HTTP daemon that handles
the Web page requests. Typically, when a user clicks on a hypertext link or types
a URL (Uniform Resource Locator), an HTTP request is built by the browser and
is sent to the IP address specified in the URL. This request is then received by the
HTTP daemon on the destination server, which, in response, sends back the Web
page that is requested.
HTTP is a stateless protocol, which means each request is processed
independently without any knowledge of the previous request. This is why
server side programming languages, such as JSP, PHP and ASP.NET have
gained popularity.
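The request/response exchange and statelessness described above can be made concrete by looking at the text an HTTP client actually sends. The host and path below are illustrative; the helper function is not a real library API.

```python
# Sketch: build the raw text of an HTTP/1.1 GET request. Because HTTP is
# stateless, every request must carry everything the server needs (method,
# path, host); the server remembers nothing from earlier requests.
def build_get_request(host, path="/"):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n")                      # blank line ends the header section

print(build_get_request("www.yahoo.com", "/index.html"))
```

The browser sends exactly such text to the IP address resolved from the URL; the HTTP daemon on the server parses it and returns the requested page as a response message of similar form.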
[Fig. 13.2 An HTTP client (e.g., a browser requesting www.yahoo.com) sends a request to the Web server, which returns a response]
The DNS hierarchical name architecture follows a directory structure and
organizes names from the most general types to the most specific types. In this manner,
the DNS name space allows names to be arranged into a hierarchy of domains
that looks like an inverted tree. In computer terminology, it resembles the
directory structure of a file system. Every standalone internetwork defines its
own name space with a unique hierarchical structure.
The domain name components are represented as English words separated
by dots, for example, www.hotmail.com, www.yahoo.co.in, etc. Each dot-separated
name is a subdomain and is managed by a separate authorized server;
for example, the server or servers authorized for '.com' manage all
domains '*.com'.
The DNS name space defines an inverted tree type structure, i.e., the
DNS tree grows from the top down. Certain terms used in relation to the
DNS tree are defined below:
Root: Since the DNS tree grows from the top down, the root occupies the top of
the DNS name structure. However, it does not define any name and is considered
null. The root domain is the parent of all the domains in the hierarchy.
Branch: It refers to the next closest part of the DNS hierarchy and describes a
domain with subdomains and objects within it. Like a real tree, all branches
ultimately connect to the root.
Leaf: Beneath a leaf, no object is defined; it is therefore an 'end object' in
the structure. Nodes that are not leaves are referred to as interior nodes, indicating
that they occupy a position in the middle of the structure.
Top Level Domains: They come directly under the root of the tree and are therefore
referred to as the highest level domains, also known as first level domains. Similarly,
the domains placed directly beneath the top level domains are called second
level domains, and so on. The TLDs are considered child domains of the root. Peers at
the same level in the hierarchy are known as siblings, so all the TLDs
are siblings with the root domain as the parent.
Subdomains: They are located directly below the second level domains.
In conclusion, a domain is either a collection of objects representing a
branch of the tree, or a specific leaf. Thus, a DNS name space is organized as a
true topological tree with exactly one parent per node and no loops. The DNS name
space is a logical structure having no relevance to the physical locations of
devices.
Naming in DNS
It involves DNS labels and label syntax rules, in which each domain or node is
described with a text label so that the domain may be identified within the structure.
The syntax rules are:
Length: A label may be 0-63 characters long; however, lengths of 1-20 characters
are most widely used.
Symbols: A name can contain letters, numbers and the dash symbol ('-') only.
Case: Labels are not case sensitive; lower and upper case forms of the same label are
equivalent. Every label needs to be unique within its parent domain but need not
be unique across domains.
Creating Domain Names
The individual domain within the domain name structure is uniquely named using
the sequence of labels from the root of the tree to the target domain,
written from right to left and separated by dots, to provide a formal name for the domain. The
root of the name space is defined with a zero-length or 'null' name by default. A
complete domain name is limited to 255 characters.
A Domain Name may be either a Fully Qualified Domain Name (FQDN)
or a Partially Qualified Domain Name (PQDN). A FQDN gives the full path of
labels from the root of the tree down to the target node, uniquely
identifying that node in the DNS name space. Unlike a FQDN, a PQDN describes
only part of a domain name, providing a relative name for a particular context.
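The naming rules above (labels of up to 63 characters drawn from letters, digits and the dash; case-insensitivity; a 255-character limit on the full name) can be encoded in a small validator. This is an illustrative sketch, not part of the text, and it checks only the rules stated above.

```python
# Sketch: validate a domain name against the label rules described above.
import re

# A label: 1-63 characters, letters/digits/dash only (case does not matter).
LABEL_RE = re.compile(r"^[A-Za-z0-9-]{1,63}$")

def is_valid_name(name):
    """Check the full-name length limit and every dot-separated label."""
    if len(name) > 255:
        return False
    labels = name.rstrip(".").split(".")   # a trailing dot marks the null root
    return all(LABEL_RE.match(label) for label in labels)

print(is_valid_name("www.hotmail.com"))    # True
print(is_valid_name("WWW.HOTMAIL.COM"))    # True: case-insensitive
print(is_valid_name("bad_label.example"))  # False: underscore not allowed
```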
It is essential to have an authority structure to manage unique TLDs. Formerly
the Network Information Center, the body now known as the Internet Assigned Numbers
Authority (IANA) is the central DNS authority for the Internet and creates TLD
names. In some cases, IANA delegates its power over some of the TLDs to other
organizations; thus, multiple authorities work in assigning and registering domain
names. Authority for a lower level domain is entrusted to the organization
that holds the corresponding second level domain. In conclusion, the DNS name space of
the Internet is managed by several authorities arranged hierarchically, in the same
manner as the DNS name space itself.
Connecting to the Internet
There are various ways to connect to the Internet. Some of the common options
are described here:
Direct Connection
Through a direct connection, a machine is directly connected to the Internet
backbone and acts as a gateway. Though a direct connection provides complete
access to all Internet services, it is very expensive to implement and maintain.
Direct connections are suitable only for very large organizations or companies.
Through the Internet Service Provider
You can also connect to the Internet through the gateways provided by Internet
Service Providers or ISPs. The range of Internet services varies from ISP to ISP.
Therefore, you should use the services of the ISP best suited to your
requirements. You can connect to your ISP using the following
two methods:
Remote Dial-up Connection
A dial-up connection, illustrated in Figure 13.4, enables you to connect to your
ISP using a modem. A modem converts the computer's digital signals to
modulated analog signals that phone lines can transmit, and vice versa. Dial-up
connections use either SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point
Protocol) for transferring information over the Internet.
[Fig. 13.4 A Dial-up Connection: the user's computer connects through its modem, over the telephone line, to the ISP's modem and the Internet backbone]
For dial-up connections, regular telephone lines are used. Therefore, the quality
of the connection may not always be very good.
Until the end of 1995, the conventional wisdom was that 28.8 Kbps was
about the fastest speed you could squeeze out of a regular copper telephone line.
Today, data transmission for a dial-up connection is typically 56 Kbps. The key
information here is to know which speed modem is supported by your ISP. If your
ISP supports only a 28.8 Kbps modem on its end of the line, you would be able
to connect to the Net only at 28.8 Kbps, even if you had the faster modem.
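The point about matching modem speeds can be put into a quick calculation. This sketch is illustrative only; it treats 1 Kbps as 1000 bits per second and ignores protocol overhead, which would make real transfers slower.

```python
# Sketch: the usable dial-up rate is the slower of the two modems, as
# described above, and download time follows from the file size.
def effective_rate_kbps(your_modem_kbps, isp_modem_kbps):
    return min(your_modem_kbps, isp_modem_kbps)

def download_seconds(size_bytes, rate_kbps):
    """Time to move size_bytes at rate_kbps (1 Kbps = 1000 bits/s here)."""
    return size_bytes * 8 / (rate_kbps * 1000)

rate = effective_rate_kbps(56, 28.8)   # ISP end supports only 28.8 Kbps
print(rate)                            # 28.8
print(round(download_seconds(1_000_000, rate)))  # ~278 s for a 1 MB file
```

So even with a 56 Kbps modem at home, a 28.8 Kbps ISP port caps a 1 MB download at roughly four and a half minutes.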
Permanent Dedicated Connection
You can also have a dedicated Internet connection that connects you to your ISP through
a dedicated phone line. A dedicated Internet connection is a permanent telephone
connection between two points. Computer networks that are physically separated
are often connected using leased or dedicated lines. These lines are preferred
because they are always open for communication traffic unlike the regular telephone
lines that require a dialling sequence to be activated. Often, this line is an ISDN
line that allows transmission of data, voice, video and graphics at very high speeds.
ISDN or Integrated Services Digital Network applications have revolutionized
the way businesses communicate. ISDN lines support upward scalability, which
means that you can transparently add more lines to get faster speeds, going up to
1.28 Mbps (Million bits per second).
T1 and T3 are the two other types of commonly used dedicated line types
for the Internet connections. Dedicated lines are becoming popular because of
their faster data transfer rates. Dedicated lines are cost effective for businesses
that use the Internet services extensively.
Digital Subscriber Line or DSL
DSL is a high speed technology that has recently gained popularity. It can carry
both data and voice over telephone lines. A DSL line can stay permanently
connected to the Internet, which means that you do not have to dial up every time
you wish to go online. Usually with DSL, data can be downloaded at rates of up
to 1.544 Mbps and sent at 128 Kbps. Because DSL lines
carry both data and voice, a separate phone line does not have to be installed.
DSL services can be established using your existing lines, as long as the service is
offered in your locality and your system lies within the appropriate distance from
the central switching office of the telephone company.
DSL services require special modems and network cards to be installed on
your computer. The cost of equipment, the monthly service and DSL installation
charges may vary considerably, so checking with your ISP is recommended. It may
be noted that prices are now declining due to increasing competition.
Cable Modems
You can also connect to the Internet at high speeds through cable TV. Since their
speeds go up to 36 Mbps, cable modems make it possible for data to be
downloaded in a matter of seconds, where they might take fifty times longer with
dial-up connections. Since they work over TV cables, they do not tie up telephone
lines and also do not require you to specifically connect as in the case of dial-up
connections.
Integrated Services Digital Network or ISDN
ISDN services are an older, but nevertheless viable technology, provided by
telephone companies. ISDN lines transfer data at rates that range between 57
Kbps and 128 Kbps. These leased lines can have two configurations, namely, T1
and T3. T1 (the commonly used connection option) is a dedicated connection that
allows data to be transferred at a speed of 1.54 Mbps or Million bits per second.
This proves to be beneficial for computers and Web servers that remain connected
to the Net at all times. Portions of a T1 line can also be leased using either the
Fractional T1 or the Frame Relay systems. They can be leased in blocks that
range between 128 Kbps and 1.5 Mbps.
Since leased lines cost more, they are usually used only by those companies
whose businesses revolve around the Internet or who engage extensively in large
data transfers.
Prerequisites for the Internet
Following are the prerequisites for the Internet:
Hardware
The hardware requirement varies from case to case.
In the case of a dial-up connection, a computer with a serial port for
connecting an external modem, or a spare expansion slot for an internal
modem card, is needed. In the case of broadband or DSL, a special modem
and a network card are required.
Others
A telephone connection in case of a dial-up modem.
An Internet account. If you want to have Internet access at your home, you
will need to sign up with an Internet Service Provider (ISP) and have an
Internet account. Some common ISPs are Mahanagar Telephone Nigam
Limited or MTNL, Videsh Sanchar Nigam Limited or VSNL, Airtel, Sify,
etc.
Factors Affecting Internet Connectivity
Speed of the Modem: A modem with a minimum speed of 56 Kbps or
higher is recommended.
Quality of Phone Line: In the case of dial-up networks, Internet
connections via modem can be disrupted by the level of noise on the
phone line that runs into your home.
Internet Traffic: The traffic on the Web generally grows throughout the
day, reaching its peak during early evening. Therefore, it is advisable to
schedule your downloading activity for off-peak hours, when surfing is faster.
The following factors associated with your computer affect Internet
speed:
1. A faster processor (2-3 GHz or higher) allows faster surfing on the
Web.
2. A computer system with a better memory (RAM) enables faster surfing.
You can prevent slowdowns by refraining from working with other
software applications while you are surfing.
3. Web surfing is also adversely affected by an exceedingly fragmented
hard disk. So it is advisable to defragment the hard drive at reasonably
frequent intervals and keep it optimized.
4. The Web browser’s Cache refers to a storage area on the computer’s
hard disk. While you are surfing, the Web pages you visit are stored in
the cache up to the disk space limit set by you. As a result, the Web
browser displays the cached pages faster, since they may be retrieved
from the hard disk and not from the Internet. You can increase the cache
limit of your browser.
5. The surfing speed can be further enhanced by using two or more browser
windows simultaneously. This facilitates reading the contents on one page
even as another page gets loaded in the other window. It is also advisable
to turn off Java and image loading in your browser. To do so, go to the
Advanced tab of Internet Options in the Tools menu. This does not
affect the content on a Web page.
Common Terminology
URL — Description
http://mysite.com/index.html — Fetch a Web page (index.html) using the HTTP protocol
ftp://www.sharware/myzip.exe — Fetch an executable file (myzip.exe) using the FTP protocol
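The URLs in the table above can be decomposed into their parts (protocol, host, path) with Python's standard `urllib.parse` module; this example is an addition for illustration, not part of the text.

```python
# Sketch: split the example URLs into scheme (protocol), host and path.
from urllib.parse import urlparse

for url in ("http://mysite.com/index.html", "ftp://www.sharware/myzip.exe"):
    parts = urlparse(url)
    print(parts.scheme, parts.netloc, parts.path)
# http mysite.com /index.html
# ftp www.sharware /myzip.exe
```

The scheme tells the browser which protocol (HTTP, FTP, etc.) to use to fetch the resource named by the host and path.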
Wi-Fi Technology
Wireless Fidelity or Wi-Fi has its roots in the public switched network
originally created by American Telephone & Telegraph or AT&T, which used Bell
Laboratories standards to ensure that all central office switches and lines that
carried calls met preset standards. The standards set by AT&T enabled
everyone to communicate with anyone else regardless of the service provider,
since dialing, ringing, routing and telephone numbering were all uniform.
Wi-Fi Security for Public Networks
Wi-Fi is used for public networks for reasons of fast connection and security,
and it serves wireless devices. The Wi-Fi Alliance, basically a non-profit
organization, represents the wireless standard protocol and supports interoperability
features for wireless devices. Wi-Fi connects networked systems without cables,
but both Wi-Fi equipment and a regular ISP service are needed. Manufacturers in
the Wi-Fi Alliance build various devices for the 802.11 standards. Approximately 205
companies have joined the Wi-Fi Alliance and almost 900 products have been certified
as interoperable. These companies give an assurance that Wi-Fi devices connect
at the physical layer of the reference models. The Wi-Fi Protected Access
(WPA) solution has recently been added to the Wi-Fi standard; the physical and
access control layers implement extra enhanced features, such as Internet
security. Wi-Fi has grown by leaps and bounds because it operates over unlicensed
spectrum, using the 2.4 GHz and 5 GHz bands, and it provides sufficient data
throughput for most uses. The prime equipment required for a Wi-Fi connection is a Wi-Fi PC
card, a common way to connect a computer to the Internet without wires.
This card is technically known as a Personal Computer Memory Card International
Association or PCMCIA card. This device offers two prime benefits, as
follows:
You can work almost anywhere with a mobile Wi-Fi device connected to the
Internet without wires, when away from your home or office.
You can free yourself from the need to drill holes and lay wires by creating a
network at home or in the office using Wi-Fi devices.
A Wi-Fi hotspot lets you connect to the Internet (see Figure 13.5). The three approaches
used to search for hotspots are as follows:
You can use the search tool provided by an organization whose branches
host Wi-Fi hotspots, such as the Starbucks chain across the world.
If you have signed up with a Wi-Fi provider, you can search via your Internet
Service Provider or ISP.
You can search the many cross-provider directories that are available on the
Web.
Wi-Fi hotspots are created around antennas that radiate the radio waves of
wireless networking. There are roughly 10,000 hotspots, largely confined to crowded areas
such as airport lounges, cafes, etc. Series of antennas are also set up to form city-wide
zones. The Internet connection is facilitated by Wi-Fi chips. Long calls
over Wi-Fi are possible by bypassing the telephone network using VoIP (Voice over
Internet Protocol) technologies. Wi-Fi-equipped mobiles and laptops connect to these
hotspots frequently, and the amount due is paid after use by credit
card on the login page presented in the Web browser. Users can also hold accounts
with service providers, such as BT Openzone, SkypeZones, Nintendo
Wi-Fi, T-Mobile, O2, etc.
[Fig. 13.6 A Wi-Fi access point and wired network clients connected through a router and modem (DSL, cable, etc.) to the Internet]
In Figure 13.6, the Wi-Fi access point is interconnected with Wi-Fi devices
and configured with a router. A modem is used to make the connection between
the router and the Internet DSL or cable line. The main role of the router is to connect
the Wi-Fi access point and the wired network clients. Voice over IP or VoIP software
enables data, fax calls and voice across IP networks and represents Internet
telephony, allowing communication between two PCs over the packet-switching
Internet. It works by encoding voice information, which is then changed to digital
Self-Instructional
Material 257
Internet and Its Working format. It provides cost benefits by converging data and voice over IP network
into the mobile phones. Many of the latest mobiles are connected with Wi-Fi via
VoIP technology. Between the Internet connection and Wi-Fi access point, there
needs to be a hardware designed to connect with the Internet and share the
NOTES Internetworking connectivity. The following guidelines are required for connecting
the Wi-Fi with public networks:
The Wi-Fi Protected Access 2 or WPA2 encryption is required for the
communications.
The shared keys and certificates are encrypted.
The Service Set IDentification (SSID) should be used for broadcasting.
Media Access Control or MAC address authentication must be used for each specific system that accesses the Wi-Fi link.
The infrastructure mode must be used for the Internet connectivity.
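The MAC address guideline above can be sketched in a few lines. This is an illustrative sketch only (the allowlist entries are made-up values), since real access points enforce this filtering in firmware:

```python
# Hypothetical sketch of MAC-address-based access control for a Wi-Fi
# access point. The allowlist contents are illustrative assumptions.

def normalize_mac(mac):
    """Normalize a MAC address to lowercase colon-separated form."""
    digits = mac.lower().replace("-", ":").split(":")
    return ":".join(d.zfill(2) for d in digits)

ALLOWED_MACS = {            # assumed allowlist maintained by the administrator
    "00:1a:2b:3c:4d:5e",
    "a4:5e:60:b2:c3:d4",
}

def may_associate(client_mac):
    """Return True only if the client's MAC is on the allowlist."""
    return normalize_mac(client_mac) in ALLOWED_MACS

print(may_associate("00-1A-2B-3C-4D-5E"))  # True
print(may_associate("ff:ff:ff:ff:ff:ff"))  # False
```

Note that MAC filtering alone is weak, since MAC addresses can be spoofed; it is one layer alongside WPA2 encryption.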
After getting the Wi-Fi connection, you can verify your shared WPA2 key, MAC address and SSID. The infrastructure mode is used to set up the wireless network card and firewall. Connecting to the Internet via Wi-Fi involves the following equipment:
Wi-Fi device (the client).
A Wi-Fi broadcast unit (the access point).
Network connectivity hardware (router and modem).
Fast Internet connection (usually via cable or DSL).
The Wi-Fi technology is best known for its fast connectivity and speed.
Figure 13.7 illustrates the various wireless standards along with their speed.
Figure 13.7 Throughput (in Mbps) of Bluetooth, 10Base-T wired Ethernet and the 802.11 Wi-Fi standards (theoretical rates)
13.7 SUMMARY
TCP/IP: It is a protocol suite that is used to transfer data over the Internet.
File Transfer Protocol or FTP: It is an application protocol that allows
files to be exchanged between computers through the Internet.
IP address: It is a unique number associated with each computer, making
it uniquely identifiable among all the computers connected to the Internet.
14.0 INTRODUCTION
In this unit, you will learn about firewalls, encryption and uses of the Internet. All computers on the Internet communicate with one another using the Transmission Control Protocol/Internet Protocol architecture, abbreviated TCP/IP, based on a client/server architecture. This means that the remote server machine provides files and services to the user's local client machine, so there is a possibility of threats to information. A firewall acts as a barrier to protect the system. Encryption is a way to secure information and ensure privacy on the Internet. You will also learn about cloud system architecture and deployment models.
14.1 OBJECTIVES
Users need to be authenticated before they use any legal e-service. When a
connection is made between the browser and server, the user can be
authenticated using a digital signature. Following are the various ways of authenticating a user:
Host name or IP address: This technique authenticates users through the host name or IP address of their machine. It does not offer the user any mobility. This authentication mechanism is useful when applied on a small scale; it is impractical on a large scale because, as the number of computers increases, it becomes difficult to manage the IP address of each computer.
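The small-scale, IP-based scheme described above can be sketched with Python's standard ipaddress module; the trusted address ranges are assumptions for illustration:

```python
# Illustrative sketch of host/IP-based authentication. The trusted
# network ranges below are made-up examples, not a recommendation.
import ipaddress

TRUSTED_NETWORKS = [ipaddress.ip_network("192.168.1.0/24"),
                    ipaddress.ip_network("10.0.0.0/8")]

def is_trusted(addr):
    """Authenticate a client purely by the address of its machine."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TRUSTED_NETWORKS)

print(is_trusted("192.168.1.42"))   # True: inside a trusted range
print(is_trusted("203.0.113.9"))    # False: outside every trusted range
```

The sketch also makes the drawback visible: every new machine means another entry (or range) to manage, and a travelling user's address changes.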
Fixed passwords: User authentication is very often performed with fixed
passwords. This can be done based on the basic authentication standard,
i.e. the password is provided via a particular pop-up browser window, and
is included in an exclusive HTTP header present in each subsequent user request. Fixed passwords are widely used, as they provide transparent
mobility, and also because they are very easy to implement and use. Most
Websites do not rely on the basic authentication standard and implement
their exclusive and individual fixed password scheme, where the user must
provide the password through an HTML form and on receiving the valid
password, the server authenticates the user. The client in subsequent requests
should include this password within the session with the server. This gets
automatically done when the authenticator is included as a cookie in the
initial reply of the server.
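For illustration, the following sketch shows how a client encodes a fixed password under the basic authentication standard mentioned above; the credentials are hypothetical:

```python
# Sketch of the HTTP basic authentication header a browser sends after
# the user fills in the pop-up password window. Credentials are made up.
import base64

def basic_auth_header(username, password):
    """Build the Authorization header: base64 of "user:password"."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("alice", "s3cret"))
# Authorization: Basic YWxpY2U6czNjcmV0
```

Note that base64 is an encoding, not encryption; this is why basic authentication should only travel over an encrypted connection.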
Dynamic passwords: These passwords are only used once and they are
more secure, but less convenient. A list of independent random one-time
passwords can be issued to users as dynamic passwords. As such a list is very difficult to learn by heart, users are forced to keep it somewhere, either on paper or, worse, in a file on their machine, which makes it inconvenient to use and prone to threats.
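A minimal sketch of such a one-time password list is given below; the passwords themselves are made-up values:

```python
# Sketch of a one-time (dynamic) password list: each password is valid
# exactly once and is discarded after a successful login.
issued = ["wq9-tr2", "hj4-pl8", "zx1-mn5"]   # hypothetical issued list

def login(attempt, remaining):
    """Accept the attempt only if it is still on the unused list."""
    if attempt in remaining:
        remaining.remove(attempt)   # the password can never be reused
        return True
    return False

print(login("hj4-pl8", issued))  # True: first use succeeds
print(login("hj4-pl8", issued))  # False: reuse is rejected
```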
Hardware tokens: These can be used with several of the previously
discussed user authentication mechanisms. The users can easily carry these
hardware tokens with them and can use them on different machines.
Hardware tokens offer more mobility than software tokens that are installed
on a particular machine. Hardware tokens display the current time interval or a response to a challenge. The site should issue a new token to all users, and all tokens should contain a different cryptographic key, as using the same token is not at all safe. There are two types of cryptographic keys: public-key encryption and symmetric keys. Using public-key encryption is safer than using symmetric keys, as symmetric keys require the maintenance of a secure database containing all the secret keys, which the site shares with its users. Therefore, symmetric keys are often cryptographically derived from a unique serial number of the token and a master key, which is the same for all the tokens. In this way, each user shares a different key with the site, without the problem of the secure database.
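The key-derivation idea described above can be sketched as follows; the master key and serial numbers are illustrative assumptions, and a real site would guard the master key far more carefully (typically in a hardware security module):

```python
# Sketch of deriving a per-token symmetric key from the token's serial
# number and a site-wide master key, so no secret-key database is needed.
import hashlib
import hmac

MASTER_KEY = b"site-master-key-demo"   # made-up value for illustration

def token_key(serial):
    """Each serial number yields a distinct, reproducible key."""
    return hmac.new(MASTER_KEY, serial.encode(), hashlib.sha256).hexdigest()

k1 = token_key("SN-0001")
k2 = token_key("SN-0002")
print(k1 != k2)  # True: every token shares a different key with the site
```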
14.3 USES OF INTERNET
the inventory package. Since the computer knows all the ins and outs for each
item, it can track the exact quantity in hand for each. The package also generates
reports for all the fresh stocks that need to be procured (based upon the specified levels). A variety of other useful MIS reports, such as ageing analysis, goods movement analysis, slow- and fast-moving stock reports, valuation reports, etc., can also be generated to assist the store keeper and accountants.
Some of the more sophisticated inventory packages (or inventory modules
of ERP packages like Oracle financials, Baan, SAP, etc.) automatically generate
purchase orders (as soon as the minimum level of any item is reached), provide
automatic posting of accounting entries (as soon as any purchase or sale is carried
out) and generate analytical reports which show the previous and future trends in
inventory consumption.
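The automatic purchase-order behaviour described above can be sketched as follows; the item names, levels and quantities are illustrative assumptions:

```python
# Sketch of automatic purchase-order generation: when stock for an item
# falls to or below its minimum level, a purchase order for the reorder
# quantity is produced. Item data is made up for illustration.

stock = {"screws": {"on_hand": 40, "minimum": 50, "reorder_qty": 500},
         "bolts":  {"on_hand": 80, "minimum": 30, "reorder_qty": 200}}

def purchase_orders(items):
    """Return (item, quantity) orders for items at or below minimum."""
    return [(name, rec["reorder_qty"])
            for name, rec in items.items()
            if rec["on_hand"] <= rec["minimum"]]

print(purchase_orders(stock))  # [('screws', 500)]
```

An ERP inventory module runs logic of this kind on every goods movement, which is how the order appears "as soon as" the minimum level is reached.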
Some interesting innovations in usage of IT for better inventory management
are as follows:
Bar coding system: Bar coding is a technique which allows data to be
encoded in the form of a series of parallel and adjacent bars and spaces
which represent a string of characters. A bar code printer encodes any
data into these bars and spaces, and then a bar code reader decodes the bar code by scanning a source of light across it and measuring the intensity of the light reflected back by the white spaces. Bar coding provides an excellent and fast method for identifying
items, their batch numbers, expiry dates, etc. without having to manually
type or read the data.
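As an illustration of the redundancy built into bar codes, the following sketch computes the EAN-13 check digit that a reader uses to validate a scan:

```python
# EAN-13 check-digit rule: weight the first 12 digits alternately 1 and
# 3, and choose the 13th digit so the total is a multiple of 10.

def ean13_check_digit(first12):
    """Compute the 13th (check) digit from the first 12 digits."""
    digits = [int(c) for c in first12]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return (10 - total % 10) % 10

print(ean13_check_digit("400638133393"))  # 1, so the full code is 4006381333931
```

If a scan misreads a single digit, the recomputed check digit will not match, and the reader rejects the scan instead of booking the wrong item.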
Information kiosks are installed at the following places:
Public access areas: Shopping malls, holiday resorts, cinema halls,
etc. use information kiosks with graphics and audio prompts that assist customers in accessing information about the desired products, services
and frequently asked questions (FAQs) about availability, price,
attributes, etc.
Public utilities: Among the early users of information kiosks were public utility organizations (in the US and Europe). Most public companies receive an enormous number of requests for information about their services, lodging of complaints, application status, etc. Instead of employing an army of front office staff (and taking on the additional hassles of their constant training and of keeping their motivation high), most organizations opted for information kiosks to provide hassle-free, round-the-clock service to their customers. Information kiosks reduce personnel cost as well as the need for vast office space and costly support equipment.
Web kiosks: Although the early usage of information kiosks was limited
largely to static information (brochures, technical information and
collaterals), information kiosks are being increasingly used to provide
database driven, online information. For instance, information kiosks at
the New York airport are linked to all the major hotels in the city and
any traveller can do an online booking after confirming room availability.
14.4 VIRUS
A system threat, especially a virus, attempts to modify the boot sector, the interrupt vectors and system files. For example, virus activity frequently triggers I/O operations if the system unit is infected. A virus is detected by searching for known virus patterns, for example by examining records and files whenever code is run. Viruses spread from machine to machine and across networks in a number of ways. A virus always tries to trigger and execute malicious programs that intentionally spread through the computer system. For example, a boot-sector virus is loaded when the system is booted from an infected disk and then spreads to the boot sector; from there it can spread via shared network drives or other exchanged media, and across the Internet through downloaded files and attachments. The transmission of viruses is possible in the following ways:
If the system unit is booted from an infected disk.
If programs are executed on an infected system.
Through common infiltration routes, such as floppy disks and e-mail attachments.
If pirated software or shareware is installed on the system.
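Signature-based detection, mentioned above, can be sketched in a few lines; the byte patterns here are harmless stand-ins, not real virus signatures:

```python
# Simplified sketch of signature-based virus scanning: search a file's
# bytes for known patterns. The signatures below are made-up stand-ins.

SIGNATURES = {b"\xde\xad\xbe\xef": "Demo.BootInfector",
              b"EVIL_MACRO":       "Demo.MacroVirus"}

def scan(data):
    """Return the names of all signatures found in the given bytes."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

clean = b"ordinary document contents"
infected = b"header EVIL_MACRO payload"
print(scan(clean))     # []
print(scan(infected))  # ['Demo.MacroVirus']
```

Real antivirus engines add heuristics and behaviour monitoring on top of this, precisely because new viruses have no signature yet.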
Viruses are so malicious and clever that users are often unable to realize whether their system unit is infected. A property of viruses is that they hide themselves among regular system files or attachments and camouflage themselves as standard files. The following steps should preferably be taken if a system gets infected with viruses:
The golden rule is to stay calm and act to prevent data destruction; this helps users avoid unnecessary stress.
The Internet connection and Local Area Network (LAN) utilities must be disconnected for the time being.
If the system does not boot properly, a second operating system installed on the system unit can be used. If the system does not recognize the hard drive, it may be infected with a virus; in that case it is better to boot from a Windows rescue disk and recover the partition table using the standard Windows ScanDisk program.
Backups of all important and critical data must be taken at regular intervals on external devices, such as Compact Discs (CDs), Universal Serial Bus (USB) drives, floppy disks or flash memory.
Antivirus software must be installed on the system and run regularly, for example every week or at month end. Good antivirus software disinfects infected objects, quarantines those it cannot clean, and is able to delete Trojans and worms.
The latest updates to the antivirus databases must be applied. The infected computer should not be used to download the updates.
The mail databases of e-mail clients should be scanned and disinfected to ensure that malicious programs are not reactivated when messages are sent from one machine to another across the network.
Firewall security features must be installed on the system to protect it from malicious and foreign programs. Corrupted applications and files must be cleaned or deleted. The user should then reinstall the required applications, making sure that the corresponding software is not pirated.
Many experts from industry and academia have expressed views on the cloud and its computing features. Some of them are given below:
Buyya et al. have given the opinion about the cloud as follows: “Cloud is a
parallel and distributed computing system consisting of a collection of inter-
connected and virtualized computers that are dynamically provisioned and
presented as one or more unified computing resources based on service-level
agreements (SLA) established through negotiation between the service
provider and consumers.”
McKinsey and Co. have stated that “Clouds are hardware-based services offering compute, network, and storage capacity where: hardware management is highly abstracted from the buyer, buyers incur infrastructure costs as variable OPEX, and infrastructure capacity is highly elastic.”
There are various platforms and technologies available for cloud computing. They come under one of the three major models: Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service. A cloud computing application is developed by properly identifying and using platforms and frameworks that provide different types of services, from the infrastructure to customizable applications serving specific purposes. Some of the cloud computing platforms, and the technologies they provide for seamless development, deployment and maintenance of cloud applications, are discussed below.
Amazon Web Services (AWS)
AWS provides cloud IaaS services for an entire computing stack, from virtual computational resources and data and application storage to networking. AWS is thus a complete computing package as a service. AWS is popularly known for its compute and storage-on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3).
EC2 is an IaaS offering: it provides consumers with customizable virtual hardware that they can use as a base infrastructure for deploying computing systems of their choice on the cloud, selecting from a large variety of virtual hardware configurations, Graphical Processing Units and cluster instances. EC2 instances can be deployed in two ways: by using the AWS console, a comprehensive Web portal for accessing AWS services, or by using the Web services API available for several programming languages. EC2 can also be used to save a specific running instance as an image, thereby permitting users to create
their own templates for deploying systems. S3 can be used to store these templates,
which delivers on-demand persistent storage. S3 is organized in terms of buckets: containers holding objects stored in binary form, with attributes that can enrich them further. Objects of any size can be stored, from simple files to complete disk images, and these objects are accessible from everywhere. Besides EC2 and S3, AWS provides a wide
range of services that can be used by a client to build a complete virtual computing
system with additional services like robust networking support, database (relational
and not) support, etc.
Google AppEngine
Google AppEngine, a proprietary cloud implementation of Google, is basically a
scalable runtime environment that is used for executing Web applications. It
dynamically utilizes the global computing infrastructure of Google to cope with the varying needs of customers. AppEngine provides both a secure execution environment and a collection of services (such as in-memory caching, a scalable data store, job queues, messaging and cron tasks) that ease the development of scalable Web applications. Developers are provided with the AppEngine software
development kit (SDK) to build and test applications on their own machines.
Once an application is developed completely, developers can then easily migrate
their application to AppEngine, set quotas to contain the costs generated, and
make the application available to the world. The languages that are currently
supported in AppEngine software development kit (SDK), are Python, Java, and
Go.
Microsoft Azure
MS Azure is Microsoft's proprietary cloud implementation. It is a cloud operating system and a platform that provides a scalable runtime environment for developing and hosting Web applications. MS Azure supports different roles that identify distribution units for applications and embody the application's logic. As of now, there are three types of roles: the Web role, the worker role and the virtual machine role. The Web role is meant to host a Web application. The worker role acts as a container for applications and performs workload processing as well. The virtual machine role is
to provide a virtual environment, which may even incorporate an operating system.
Besides role implementations, MS Azure also provides some extra services like
data storage, networking, caching, etc. to support application execution.
Apache Hadoop
Apache Hadoop is an open source framework, developed under the Apache Software Foundation with significant early support from Yahoo!. Hadoop is used to process large data sets and is regarded as an implementation of the MapReduce programming model. MapReduce is an application programming model, developed at Google, that performs two basic operations for data processing: map and reduce. The map function transforms and synthesizes the input data provided by the user, while reduce aggregates the output obtained by the map operations performed over the large data sets. Hadoop eases the task by providing the runtime environment, in which developers just have to provide the input data and specify the map and reduce functions to be executed, without worrying about the internal composition of the algorithms implemented there.
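The map and reduce operations can be illustrated with the classic word-count example. This sequential sketch only mimics what Hadoop would distribute over a cluster:

```python
# Word-count sketch of the MapReduce model: map emits (word, 1) pairs,
# reduce sums the counts per word. Hadoop runs the same two functions
# in parallel over partitions of a large data set.
from collections import defaultdict

def map_fn(line):
    """Map: transform one input line into (key, value) pairs."""
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    """Reduce: aggregate all values collected for one key."""
    return word, sum(counts)

lines = ["the quick fox", "the lazy dog", "the fox"]
grouped = defaultdict(list)
for line in lines:                          # map phase
    for word, count in map_fn(line):
        grouped[word].append(count)          # shuffle: group by key
results = dict(reduce_fn(w, c) for w, c in grouped.items())  # reduce phase
print(results["the"], results["fox"])  # 3 2
```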
Force.com and Salesforce.com
Force.com is a cloud computing platform for developing social enterprise
applications. The platform is the basis for SalesForce.com, a Software-as-a-Service
solution for customer relationship management. Force.com allows developers to create applications by composing ready-to-use blocks; a complete set of components supporting all the activities of an enterprise is available. It is also possible to
develop your own components or integrate those available in AppExchange into
your applications. The platform provides complete support for developing
applications, from the design of the data layout to the definition of business rules
and workflows and the definition of the user interface. The Force.com platform is
completely hosted on the cloud and provides complete access to its functionalities
and those implemented in the hosted applications through Web services technologies.
Manjrasoft Aneka
Manjrasoft Aneka is a cloud application platform meant for rapid development
and deployment of scalable applications on various types of clouds. It supports a
number of programming abstractions and runtime environment for developing
applications that can be deployed on heterogeneous hardware be it clusters,
networked desktop computers, and/or other cloud resources. Developers can
opt from different abstractions to design their application: tasks, distributed threads,
and map-reduce. These applications are then executed on the distributed service-
oriented runtime environment, which can dynamically integrate additional resource
on demand. The service-oriented architecture of the runtime provides a great degree
of flexibility and also simplifies the integration of new features, such as abstraction
of a new programming model and associated execution management environment.
Apart from these, there are services that manage almost all the activities happening
during runtime: scheduling, execution, accounting, billing, storage and quality-of-service monitoring.
communicate with each other by means of the shared memory without hampering
Approaches to Parallel Computing
There are many parallel programming approaches available. Some of them are
discussed below:
Data parallelism: In data parallelism, the divide-and-conquer technique is used to divide the data set into multiple subsets, and each subset is processed on a different PE using the same instruction. This approach is best suited to processing on SIMD machines.
Process parallelism: In the case of process parallelism, a given operation
has multiple but unique/distinct activities that can be processed on multiple
processors.
Farmer-and-worker model: In the farmer-and-worker model, jobs are distributed among the PEs. One processor acts as the master (farmer) and all the remaining PEs are designated as slaves (workers). The master assigns jobs to the slave PEs and, on completion, the slave PEs inform the master, which finally collects the results.
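The data-parallel and farmer-and-worker ideas can be combined in a small sketch: the data set is divided into subsets, each worker applies the same operation (summation here), and the master combines the partial results. A thread-backed pool stands in for the PEs:

```python
# Data-parallel sketch: split the data, apply the same instruction to
# each chunk on a worker, and let the master combine partial results.
from multiprocessing.dummy import Pool  # thread-backed pool for the demo

def split(data, parts):
    """Divide-and-conquer: cut the data set into `parts` chunks."""
    k = len(data) // parts
    return [data[i * k:(i + 1) * k] for i in range(parts - 1)] + [data[(parts - 1) * k:]]

data = list(range(1, 101))
with Pool(4) as pool:                         # four "worker PEs"
    partials = pool.map(sum, split(data, 4))  # same instruction, different data
print(sum(partials))  # 5050: the master aggregates the partial sums
```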
Levels of parallelism
Levels of parallelism are defined on the basis of the lumps of code (grain size) that are possible candidates for parallelism. The categories of code granularity for parallelism are large (task level), medium (control level), fine (data level) and very fine grain (multiple-instruction issue). But whatever the size of the grain, all have a common goal: to increase processor efficiency by hiding latency.
Elements of Distributed Computing
Distributed computing deals with the study and application of the models,
architectures, and algorithms used for building and managing distributed systems.
Tanenbaum et al. define distributed computing as follows: ‘A distributed system is a collection of independent computers that appears to its users as a single coherent system.’ Since distributed systems are composed of more than one computer that collaborate, it is necessary to provide some sort of data and information exchange between them, which generally occurs through the network.
Coulouris et al. defined the distributed systems as: ‘A distributed system is
one in which components located at networked computers communicate and
coordinate their actions only by passing messages.’ As specified in this definition, the components of a distributed system communicate with each other through some sort of message passing mechanism, a term that encompasses several communication models.
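The message-passing coordination in this definition can be sketched as follows, with a queue standing in for the network between two components that share no state:

```python
# Sketch of message-passing coordination: two components interact only
# by exchanging messages; the queues model the network between them.
import queue
import threading

inbox = queue.Queue()

def server():
    msg = inbox.get()                          # receive a request message
    msg["reply"].put(msg["payload"].upper())   # send a reply message back

reply_box = queue.Queue()
t = threading.Thread(target=server)
t.start()
inbox.put({"payload": "hello", "reply": reply_box})  # client sends a request
reply = reply_box.get()                              # client awaits the reply
t.join()
print(reply)  # HELLO
```

The essential point matches the definition: neither side reads the other's variables; all coordination happens through the messages themselves.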
that are specifically required for development or deployment. Also, the client needs to pay only for what he uses and only for the time he has used it. This is very economical for the client, who also does not have to take care of infrastructure maintenance beyond paying for use. As a result, customers can achieve much faster service delivery at a lower cost. Examples are GoGrid, Flexiscale, Layered Technologies, Joyent and Rackspace.
Cloud Reference Model
Cloud computing provides hardware and software infrastructure, development platforms and applications as services that are consumed as a utility and delivered to the consumer over the Internet. In computation, all of the above are interrelated, with the application being the basic entity: an application is developed over certain platforms utilizing the hardware and software infrastructure. If all of these are arranged according to their utilization, a stack can be seen. So there should be a model that gives a glimpse of such an organization as a layered view covering the entire stack, from hardware to software systems, as given in Figure 14.4.
Getting a complete picture of cloud computing is a somewhat hectic task, since with new technological evolution one can never predict what will be provided as a service and what will be the cloud base. But to start with, you can distinguish the realm of cloud computing into two distinct sets:
Deployment Models
A cloud model set that is based on two entities: the location and the management of the cloud's infrastructure. There are four deployment models, as discussed below:
1. Public clouds: Public clouds were the very first cloud service offering. They are boundaryless clouds, as the services offered over a public cloud are easily accessible and available to anyone, from anywhere, at any time via the Internet. Structurally, a public cloud can be viewed as a distributed system composed of one or more connected datacentres, on top of which the cloud offers certain computational services. To access the services offered, any customer can simply sign up with the cloud provider, enter login credentials and billing details, and use the services offered. Public clouds are meant to support a large number of users. They can scale horizontally or vertically as demand requires and can sustain peak loads. They offer solutions for minimizing IT infrastructure costs and serve as the best possible option for handling peak loads on the local infrastructure.
2. Private Cloud: Private clouds are virtual distributed systems that rely on a
private infrastructure and provide internal users with dynamic provisioning
of computing resources. Instead of a pay-as-you-go model as in public
clouds, there could be other schemes in place, taking into account the usage
of the cloud and proportionally billing the different departments or sections
of an enterprise. Private clouds have the advantage of keeping the core
business operations in-house by relying on the existing IT infrastructure and
reducing the burden of maintaining it once the cloud has been set up. In this
scenario, security concerns are less critical, since sensitive information does
not flow out of the private infrastructure. Moreover, existing IT resources
can be better utilized because the private cloud can provide services to a
different range of users.
3. Hybrid Clouds: Public clouds are typically meant to serve the needs of a large pool of customers, as they accommodate large software and hardware infrastructures. But these clouds have a severe drawback: security threats and administrative problems. If you want a large infrastructure to use and are cost-conscious, but are not concerned about security issues, then public clouds are for you.
Private clouds are primarily meant for those who wish to securely keep the
processing of information within an enterprise’s premises or want to utilize
the existing hardware and software infrastructure. But there is also one problem: you can secure the information and utilize your resources, but when you have to address an increase in demand for resources during peak hours, a private cloud will not suffice; you will require the public cloud. Hence, you have to create something with a little of both public and private clouds, taking advantage of the best of the private and public worlds. This led to the development of hybrid clouds.
Hybrid clouds contain attributes of both the public and the private clouds.
Like private clouds, these hybrid clouds allow enterprises to exploit existing
IT infrastructures, maintain sensitive information within the premises, and
like public clouds sustain the peak load by growing and shrinking the external resources and releasing them when they're no longer required. So, a hybrid
cloud can be defined as a heterogeneous distributed system resulting from
a private cloud that integrates additional services or resources from one or
more public clouds. Due to this, they are also called heterogeneous clouds.
4. Community Clouds: The IT industry is too dynamic for its requirements to be met by a single cloud implementation, so community clouds are required to sustain the needs of an industry, a community, etc. Just as a community is created from a large number of people with diverse skills, interests, etc., a community cloud can be created by integrating the varied services of different clouds to address the needs of an industry, a community or the
business sector. The National Institute of Standards and Technologies
(NIST) characterizes community clouds as follows: The infrastructure is
shared by several organizations and supports a specific community that has
shared concerns (e.g., mission, security requirements, policy, and compliance
considerations). It may be managed by the organizations or a third party
and may exist on premise or off premise.
Figure 14.5 provides a general view of the usage scenario of community
clouds, together with reference architecture. A community cloud is
different from the public cloud in the sense that it serves just a community of users (government bodies, industries, or even simple users) with more or less similar needs, unlike public clouds, which serve a multitude of users with varying needs. Community clouds are also
different from private clouds, as a community cloud is implemented over
multiple administrative domains (government bodies, private enterprises,
research organizations, and even public virtual infrastructure providers)
unlike the private cloud where the services are generally delivered within
the institution that owns the cloud.
14.10 SUMMARY
Public, private, hybrid and community clouds are the types of deployment models.
14.11 KEY WORDS