DATA COMMUNICATION AND COMPUTER NETWORKING
Masters of Finance and Control
Amity University
Semester IV
Preface

Today's networks have evolved from simple dial-up systems with broad access to content (the Internet) into full-featured, service-oriented systems. Networks that originated under a simple Internet-access business plan have found customer demand for more resource-intensive applications increasing. Additionally, the promise of high-speed broadband access has raised the expectations of the average customer: networks must not only provide access to applications and services, but these functions must be easily accessible, uniform between subscribers and predictable in terms of behavior. Every possible effort has been made in this study material to include all the latest technologies.
It is a technical and practical subject, and learning it means familiarizing oneself with many new terms. New, faster networking technologies are redefining LAN environments with high-speed topologies that offer data throughput of at least 100 megabits per second. As this Study Material is intended to serve beginners in the field, I have given it the quality of simplicity. It is intended to serve as a Study Material for students of the MFC course of Amity University. This Study Material is 'student oriented' and written in a teach-yourself style. The primary objective of this study material is to facilitate a clear understanding of the subject of "Data Communication and Computer Networking". This Material contains a wide range of theoretical and practical questions varying in content, length and complexity. Most of the questions have been taken from various university examinations. This material contains sufficient material to assist a better grasp and understanding of the subject. The reader will find perfect accuracy with regard to the answers to the exercise questions. For the convenience of students, I have also included multiple-choice questions and a case study in this Study Material for a better understanding of the subject.
I hope that this Material will prove useful to both students and teachers. The contents of this Study Material are divided into five chapters covering various aspects of the syllabus of MFC and other related courses. At the end of this Material, three assignments have been provided which are related to the subject matter.
I have taken a considerable amount of help from various literature, journals and media. I express my gratitude to all those personalities who have devoted their lives to knowledge, from whom I could learn, and on the basis of that learning I am now trying to deliver my knowledge to others through this material.
It is by God's loving grace that He brought me into this world and blessed me with loving and caring parents, my respected father Mr. Manohar Lal Arora and my loving mother Mrs. Kamla Arora, who have supported me in this Study Material.
Words may not be enough for me to express my deep sense of
gratitude and indebtedness to Dr. Shipra Maitra, Director (Amity College of
Commerce & Finance, Amity University, Noida) for the benevolent
guidance, constructive criticism and constant encouragement throughout the
period I have been involved in this Study Material.
I am thankful to my beloved wife Mrs. Deepti Arora, without whose constant encouragement, advice and material sacrifice this achievement would have been a far-off dream.

Index

S. No. Chapter No. Subject Page No.

1. 1 Introduction to Data transmission


2. 1.1 Introduction 1
3. 1.2 Application and history of data transmission 2
4. 1.3 Distinction between data transmission and others 3
5. 1.4 Data transmission modes 4
6. 1.5 Other modes of data transfer 7
7. 1.5.1 Serial transmission
8. 1.5.2 Parallel transmission
9. 1.6 Merits and demerits of parallel data transmission 10
over serial transmission
10. 1.7 Data transmission medium & channels 10
11. 1.7.1 Transmission channel
12. 1.7.2 Basics of electromagnetic waves
13. 1.7.3 Types of physical media
14. 1.8 Interference in data transmission 13
15. 1.8.1 White noise
16. 1.8.2 Impulsive noises
17. 1.9 Bandwidth and capacity 14
18. 1.10 Analog and digital signals 15
19. 1.10.1 Analog transmission
20. 1.10.2 Digital transmission
21. 1.10.3 Translating information
22. 1.10.4 Principles of analog transmission
23. 1.10.5 Analog transmission of analog data
24. 1.10.6 Analog transmission of digital data
25. 1.11 Signal encoding 21
26. 1.11.1 NRZ encoding
27. 1.11.2 NRZI encoding
28. 1.11.3 Manchester encoding
29. 1.11.4 Delay encoding
30. 1.11.5 Bipolar encoding
31. 1.12 Asynchronous and synchronous transmission 25
End Chapter Quizzes 28

32. 2 Data Transmission media
33. 2.1 Introduction 31
34. 2.2 Coaxial cable 31
35. 2.3 Twisted pair cabling 35
36. 2.3.1 Unshielded twisted pair
37. 2.3.2 Shielded twisted pair
38. 2.4 Fiber optics 38
39. 2.5 Multiplexing 39
40. 2.5.1 Introduction
41. 2.5.2 Types of multiplexing
42. 2.5.2.1 Frequency division multiplexing
43. 2.5.2.2 Time division multiplexing
44. 2.5.2.3 Statistical multiplexing
45. 2.6 Types of data communication 40
46. 2.6.1 Simplex communication
47. 2.6.2 Half duplex communication
48. 2.6.3 Full duplex communication
49. 2.7 Emulation of full duplex in shared physical media 45
2.7.1 Time division duplexing
2.7.2 Frequency division duplexing
50. 2.7.3 Echo cancellation
End Chapter Quizzes 48

51. 3 Data networks


52. 3.1 Introduction 51
53. 3.2 History of networks 52
54. 3.3 Network classification 53
55. 3.3.1 Connection method
56. 3.3.2 Scale
57. 3.3.3 Functional relationship
58. 3.3.4 Network topology
59. 3.4 Types of networks 55
60. 3.4.1 Personal area network
61. 3.4.2 Local area network
62. 3.4.3 Campus area network
63. 3.4.4 Metropolitan area network
64. 3.4.5 Wide area network

65. 3.4.6 Wireless network
66. 3.4.6.1 Types of wireless networks
67. 3.4.7 Mobile devices networks
68. 3.4.7.1 Uses of mobile devices networks
69. 3.4.8 Global area networks
70. 3.4.9 Virtual private network
71. 3.4.10 Internetwork
72. 3.5 Views of network 67
73. 3.6 Network topology 68
74. 3.6.1 Bus
75. 3.6.2 Ring
76. 3.6.3 Star
77. 3.6.4 Mesh
78. 3.7 Switching 81
79. 3.7.1 Circuit switching
80. 3.7.2 Packet switching
81. 3.7.3 History of packet switching
82. 3.7.4 Connectionless and connection oriented packet switching
83. 3.7.5 Packet switching in networks
End Chapter Quizzes 91

84. 4 Internet and internet protocols


85. 4.1 Introduction 94
86. 4.2 Terminology 95
87. 4.3 History of internet 95
88. 4.4 Growth of internet 98
89. 4.5 Today's internet 100
90. 4.6 Internet structure 101
91. 4.7 Language used on internet 102
92. 4.8 Uses of internet 104
93. 4.8.1 Why do people use internet
94. 4.8.2 Why do people put things on internet
95. 4.8.3 E mail
96. 4.8.4 The World wide web
97. 4.8.5 Remote access
98. 4.8.6 Collaboration
99. 4.8.7 File sharing
100. 4.8.8 Streaming media
101. 4.8.9 Internet telephony
102. 4.8.10 Political organization and censorship

103. 4.8.11 Leisure activities
104. 4.8.12 Marketing
105. 4.9 Internet access 121
106. 4.10 Social impact of internet 123
107. 4.11 Complex architecture 124
108. 4.12 Internet protocols 125
109. 4.12.1 Internet protocol suite
110. 4.12.2 History of internet protocol
111. 4.12.3 Layers in internet protocol suits
112. 4.12.4 Implementations
End Chapter Quizzes 136

113. 5 Multimedia
114. 5.1 Introduction 139
115. 5.2 History of multimedia 141
116. 5.3 Major features of multimedia 142
117. 5.4 Multimedia applications 143
118. 5.5 Multimedia and the future 144
119. 5.6 Categorization of multimedia 146
120. 5.7 Uses of multimedia 146
121. 5.7.1 Creative industries
122. 5.7.2 Commercial
123. 5.7.3 Entertainments and fine arts
124. 5.7.4 Education
125. 5.7.5 Engineering
126. 5.7.6 Industry
127. 5.7.7 Mathematical and scientific research
128. 5.7.8 Medicine
129. 5.7.9 Miscellaneous
130. 5.8 Structuring information in a multimedia form 152
End Chapter Quizzes 154

Answer key to End Chapter Quizzes 157


Bibliography 158
Assignment A 159
Assignment B 159
Assignment C 161
Answer key to Assignment C 171

CHAPTER ONE

INTRODUCTION TO DATA TRANSMISSION


1.1 Introduction
Data transmission is the transfer of data from point to point, often represented as an electromagnetic signal, over a physical point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication channels, and storage media. Although the data transferred may be exclusively analog signals, in modern times the transferred data is most often a digital bit stream that may originate from a digital information source, for example a computer or a keyboard, or from a digitized analog signal, for example an audio or video signal.

Data transmitted may be analog or digital (i.e. digital bit stream) and
modulated by means of either analog modulation or digital modulation using
line coding. The concept of digital communication is typically associated
with digital representation of analog signals, including source coding and
Pulse-code modulation, but that may also be covered in a textbook on data
transmission.

Data transmission is a subset of the field of data communications, which also includes computer networking or computer communication applications and networking protocols, for example routing, switching and process-to-process communication.

1.2 Applications and history of data transmission
Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical, acoustic, mechanical) means since the advent of communication. Analog signal data has been sent electronically since the advent of the telephone. However, the first electromagnetic data transmission applications in modern times were telegraphy (1809) and teletypewriters (1906), both of which use digital signals. The fundamental theoretical work in data transmission and information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the early 20th century was done with these applications in mind.

Data transmission is utilized in computers, in computer buses and for communication with peripheral equipment via parallel ports and serial ports such as RS-232 (1969), FireWire (1995) and USB (1996). The principles of data transmission are also utilized in storage media for error detection and correction, since 1951. Data transmission is utilized in computer networking equipment such as modems (1940), local area network (LAN) adapters (1964), repeaters, hubs, microwave links, wireless network access points (1997), etc.

In telephone networks, digital communication is utilized for transferring many phone calls over the same copper cable or fiber cable by means of pulse-code modulation (PCM), i.e. sampling and digitization, in combination with time-division multiplexing (TDM) (1962). Telephone exchanges have become digital and software controlled, facilitating many value-added services. For example, the first AXE telephone exchange was presented in 1976. Since the late 1980s, digital communication to the end user has been possible using Integrated Services Digital Network (ISDN) services. Since the end of the 1990s, broadband access techniques such as ADSL, cable modems, fiber-to-the-building (FTTB) and fiber-to-the-home (FTTH) have become widespread in small offices and homes. The current tendency is to replace traditional telecommunication services with packet-mode communication such as IP telephony and IPTV.

The digital revolution has also resulted in many digital telecommunication applications where the principles of data transmission are applied. Examples are second-generation (1991) and later cellular telephony, video conferencing, digital TV (1998), digital radio (1999), telemetry, etc.

1.3 Distinction between data transmission and others

Data transmission is a subset of the field of data communications, which also includes computer networking or computer communication applications and networking protocols, for example routing, switching and process-to-process communication. Courses and literature in computer networking and data communications typically also deal with the protocol layers of the seven-layer OSI model other than those above. Analog modulation schemes such as AM and FM are used for transferring analog message signals over analog passband channels, and are not covered by the field of data transmission, as seen in the above-mentioned references, but by tele-transmission.

Depending on the definition, the concept of digital data transmission often implies that the data is transferred as a digital signal over a digital baseband channel, for example a serial cable or fiber optics, by means of a line coding method, for example Manchester coding. This results in a pulse-amplitude-modulated signal, also known as a pulse train. Analog data transmission implies that the data is transferred over an analog passband channel, for example a filtered telephone access network copper wire or a wireless channel, by means of some digital modulation scheme such as PSK, FSK or ASK. Note that the latter is sometimes considered a digital signal, sometimes an analog signal, depending on how a digital signal is defined. Also note that some textbooks on digital data transmission cover both digital and analog data transmission schemes as defined here, i.e. both line coding and digital modulation schemes.

Textbooks and courses on "digital communication" and "data transmission" have similar content. If there is a difference, the concept of digital communication may firstly be associated with digital representation of analog signals, including source coding and pulse-code modulation.

1.4 Data transmission modes


A given transmission on a communications channel between two machines can occur in several different ways. The transmission is characterised by:

the direction of the exchanges
the transmission mode: the number of bits sent simultaneously
synchronisation between the transmitter and receiver.

There are 3 different transmission modes characterized according to the
direction of the exchanges:

1.4.1 Simplex

1.4.2 Half duplex

1.4.3 Full duplex

1.4.1 Simplex

It is a connection in which the data flows in only one direction, from the transmitter to the receiver. This type of connection is useful if the data does not need to flow in both directions (for example, from your computer to the printer, or from the mouse to your computer).

1.4.2 Half-duplex
It is (sometimes called an alternating connection or semi-duplex) a connection in which the data flows in one direction or the other, but not both at the same time. With this type of connection, each end of the connection transmits in turn. This type of connection makes it possible to have bidirectional communication using the full capacity of the line.

1.4.3 Full-duplex connection

It is a connection in which the data flows in both directions simultaneously. Each end of the line can thus transmit and receive at the same time, which means that the bandwidth is divided in two for each direction of data transmission if the same transmission medium is used for both directions of transmission.

1.5 Other modes of data transfer

There are a few other modes of data transmission, which are as follows:

1.5.1 Serial transmission

In a serial connection, the data are sent one bit at a time over the
transmission channel. However, since most processors process data in
parallel, the transmitter needs to transform incoming parallel data into serial
data and the receiver needs to do the opposite.

These operations are performed by a communications controller (normally a UART, or Universal Asynchronous Receiver Transmitter, chip). The communications controller works in the following manner:

The parallel-serial transformation is performed using a shift register. The shift register, working together with a clock, will shift the register (containing all of the data presented in parallel) by one position to the left, and then transmit the most significant bit (the leftmost one), and so on:

The serial-parallel transformation is done in almost the same way
using a shift register. The shift register shifts the register by one position to
the left each time a bit is received, and then transmits the entire register in
parallel when it is full:
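The two shift-register transformations described above can be sketched in Python. This is a simplified illustration only; a real UART also frames each byte with start, stop and (optionally) parity bits:

```python
def serialize(byte):
    """Parallel-to-serial: shift left and emit the most significant bit each clock tick."""
    bits = []
    register = byte
    for _ in range(8):
        bits.append((register >> 7) & 1)   # transmit the leftmost bit
        register = (register << 1) & 0xFF  # shift the register one position left
    return bits

def deserialize(bits):
    """Serial-to-parallel: shift in each received bit, then read the full register."""
    register = 0
    for bit in bits:
        register = ((register << 1) | bit) & 0xFF
    return register

# Round-trip the letter "A" (01000001)
assert serialize(0b01000001) == [0, 1, 0, 0, 0, 0, 0, 1]
assert deserialize(serialize(0b01000001)) == 0b01000001
```

The asserts confirm that the receiver recovers exactly the byte the transmitter presented in parallel.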

In telecommunications, serial transmission is the sequential transmission of the signal elements of a group representing a character or other entity of data. Digital serial transmissions are bits sent over a single wire, frequency or optical path sequentially. Because it requires less signal processing and offers fewer chances for error than parallel transmission, the transfer rate of each individual path may be faster. Serial transmission can also be used over longer distances, as a check digit or parity bit can easily be sent along with the data.

1.5.2 Parallel transmission


Parallel connection means the simultaneous transmission of N bits. These bits are sent simultaneously over N different channels (a channel being, for example, a wire, a cable or any other physical medium). The parallel connection on PC-type computers generally requires 10 wires.

These channels may be:

N physical lines: in which case each bit is sent on a physical line (which is why parallel cables are made up of several wires in a ribbon cable)
one physical line divided into several sub-channels by dividing up the bandwidth: in this case, each bit is sent at a different frequency.

In telecommunications, parallel transmission is the simultaneous transmission of the signal elements of a character or other entity of data. In digital communications, parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used which can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is used internally within the computer, for example in the internal buses, and sometimes externally for such things as printers. The major issue with this is "skew": because the wires in parallel data transmission have slightly different properties (not intentionally), some bits may arrive before others, which may corrupt the message. A parity bit can help to detect such errors. However, electrical-wire parallel data transmission is therefore less reliable over long distances, because corrupt transmissions are far more likely.
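The parity-bit check mentioned above can be illustrated with a short Python sketch. This uses even parity and, like any single parity bit, detects an odd number of flipped bits but cannot correct them:

```python
def add_even_parity(bits):
    """Append a parity bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(bits):
    """Accept a received word only if the count of 1s (data + parity) is even."""
    return sum(bits) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 0, 0, 1])
assert check_even_parity(word)        # an intact word passes the check
word[3] ^= 1                          # flip one bit "in transit"
assert not check_even_parity(word)    # the single-bit error is detected
```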

1.6 Merits and demerits of parallel data transmission over
serial transmission

1.6.1 Merits

1. The biggest advantage of parallel data transmission is speed. Since data bits are transmitted simultaneously down parallel wires instead of sequentially down a single wire, parallel data transmission is nine times faster than serial data transmission.

1.6.2 Demerits
1. The difficulty of readying and simultaneously releasing nine bits at
their starting gates.
2. The problem of assuring the concurrent arrival of all transmitted bits
at their receiving gates for proper interpretation.
3. The cost of expensive cable instead of twisted-pair wires for connections to external devices.

1.7 Data transmission medium & channels

In order for data transmission to occur, there must be a transmission line, also called a transmission channel or channel, between the two machines. These transmission channels are made up of several segments that allow the data to circulate in the form of electromagnetic, electrical, light or even acoustic waves. So, in fact, it is a vibratory phenomenon that is propagated over the physical medium.

1.7.1 Transmission Channel

A transmission line is a connection between two machines. The term transmitter generally refers to the machine that sends the data, while receiver refers to the one receiving the data. The machines can sometimes be both receivers and transmitters (this is generally the case with computers connected to a network).

A transmission line, also sometimes called a transmission channel, does not necessarily consist of a single physical medium, which is why the end machines (as opposed to the intermediary machines), called DTE (Data Terminal Equipment), each have equipment for the physical medium to which they are connected, called DCTE (Data Circuit Terminating Equipment) or DCE (Data Communication Equipment). The term data circuit refers to the assembly consisting of the DCTE of each machine and the data line.

1.7.2 The basics of electromagnetic waves

Data is transmitted on a physical medium by propagation of a vibratory phenomenon. An undulating signal results, depending on the physical quantity that is being varied:

in the case of light, it is a light wave
in the case of sound, it is a sound wave
in the case of the voltage or amperage of an electric current, it is an electrical wave

Electromagnetic waves are characterised by their frequency, their amplitude and their phase.

1.7.3 Types of physical media

The physical transmission media are the elements that allow information to flow between transmission devices. These media are generally divided into three categories, according to the type of physical quantity that they allow to circulate, and therefore according to their physical composition, which are as follows:

1.7.3.1 Wire media: allow an electrical quantity to circulate on a cable that is generally metallic.

1.7.3.2 Aerial media: refer to the air or a vacuum, which allow the circulation of electromagnetic waves and various types of radio-electric waves.

1.7.3.3 Optical media: allow information to be sent in the form of light.

The speed of the physical quantity will vary depending on the physical medium (for example, sound propagates through the air at a speed on the order of 300 m/s, whereas the speed of light is close to 300,000 km/s).

1.8 Interference in data transmission

Data transmission on a line is not lossless. First of all, the transmission time is not immediate, which requires a certain "synchronisation" of the data on reception. In addition, interference or signal degradation can occur.

Interference (often called noise) refers to any perturbation that locally modifies the form of the signal. Generally, noise is of the following types:

1.8.1 White noise: a uniform perturbation of the signal; in other words, it adds a small amplitude to the signal whose average effect on the signal is nil. White noise is generally characterised by a ratio called the signal-to-noise ratio, which expresses the amplitude of the signal with respect to the noise (its unit is the decibel). It should be as high as possible.

1.8.2 Impulsive noises: small peaks of intensity causing transmission errors.

Signal line loss, or attenuation, represents the loss of signal through energy dissipated in the line. Attenuation results in an output signal that is weaker than the input signal and is characterised by the formula:

A = 20 log (Output signal level / Input signal level)

Attenuation is proportional to the length of the transmission channel and the frequency of the signal.
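The attenuation formula can be evaluated directly. A short sketch, using made-up signal levels purely for illustration:

```python
import math

def attenuation_db(output_level, input_level):
    """A = 20 log10(output / input); a negative result indicates signal loss."""
    return 20 * math.log10(output_level / input_level)

# Hypothetical example: a 1.0 V input signal arriving at the far end as 0.5 V
loss = attenuation_db(0.5, 1.0)
print(round(loss, 1))  # -6.0 (halving the signal level costs about 6 dB)
```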

Signal distortion characterises the phase difference between the input signal and the output signal.

1.9 Bandwidth and capacity

The bandwidth of a transmission channel is the frequency interval over which the signal does not experience a line loss greater than a certain value (generally 3 dB, as 3 decibels corresponds to a signal power loss of 50%). A telephone line, for example, has a bandwidth of between approximately 300 and 3400 hertz for an attenuation of 3 dB.

The capacity of a channel is the amount of information (in bits) that can be transmitted on the channel in 1 second. Capacity is characterised by the following formula:

C = W log2 (1 + S/N)

where:
C is the capacity (in bps)
W is the bandwidth (in Hz)
S/N is the signal-to-noise ratio of the channel.
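The capacity formula can be applied to the telephone-line figures above. A sketch assuming a 3,100 Hz bandwidth (3400 - 300 Hz) and a signal-to-noise ratio of 30 dB; the 30 dB figure is a common textbook value, not one stated in the text. Note that S/N in the formula is a linear ratio, so the dB value must be converted first:

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = W log2(1 + S/N), with S/N as a linear ratio."""
    snr_linear = 10 ** (snr_db / 10)  # 30 dB -> a power ratio of 1000
    return bandwidth_hz * math.log2(1 + snr_linear)

# Telephone channel: 3400 - 300 = 3100 Hz of bandwidth at 30 dB SNR
print(round(capacity_bps(3100, 30)))  # 30898 bps, roughly 30 kbps
```

This is close to the practical limit that dial-up telephone modems eventually approached.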

1.10 Analog & Digital Signals


In general, there are two types of telecommunication transmissions, which are:

1.10.1 Analog transmission
1.10.2 Digital transmission

1.10.1 Analog Transmission

Analog transmission uses signals that are exact replicas of the sound wave or picture being transmitted. Signals of varying frequency or amplitude are added to carrier waves with a given frequency of electromagnetic current to produce a continuous electric wave. The term "analog signal" came about because the variations in the carrier wave are similar, or analogous, to those of the voice itself.

For example, in analog transmission, say in a telephone system, an electric current reproducing the pattern of the sound waves is transmitted through a wire and into the telephone receiver, where it is converted back into sound waves.

1.10.2 Digital Transmission

In digital transmission the signals are converted into a binary code, which consists of two elements: positive and non-positive. Morse code and the "on and off" flashing of a light are basic examples. Positive is expressed as the number 1, while non-positive is expressed as the number 0. Numbers that are expressed as a string of 0s and 1s are called binary numbers. Every digit in a binary number is referred to as a bit and represents a power of two. For example, in the binary number 101, the 1 at the right represents 1 × 2⁰; the 0 in the middle represents 0 × 2¹; and the 1 at the far left represents 1 × 2². The decimal equivalent of 101 is (1 × 2²) + (0 × 2¹) + (1 × 2⁰) = 4 + 0 + 1 = 5. In a standard code used by most computers, the letter "A" is expressed in 8 bits as 01000001.
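The positional arithmetic above is exactly what Python's built-in binary conversion performs, so the worked example can be checked directly:

```python
# The binary number 101: each digit weights a power of two
value = 1 * 2**2 + 0 * 2**1 + 1 * 2**0
assert value == 5
assert int("101", 2) == 5  # the built-in base-2 conversion agrees

# The letter "A" in the standard (ASCII) code, expressed in 8 bits
assert format(ord("A"), "08b") == "01000001"
```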

As an example of digital transmission, in one type of digital telephone system, coded light signals produced by a rapidly flashing laser travel through optical fibers (thin strands of glass) and are then decoded by the receiver. When transmitting a telephone conversation, the light flashes on and off about 450 million times per second. This high rate enables two optical fibers to carry about 15,000 conversations simultaneously.

Digital format is ideal for electronic communication, as the string of 1s and 0s can be transmitted by a series of "on/off" signals represented by pulses of electricity or light. A pulse "on" can represent a 1, and the lack of a pulse ("off") can represent a 0. Information in this form is much easier to store electronically. Furthermore, digital transmission is usually faster and involves less noise and disturbance than analog data transmission.

This transformation of binary information into a two-state signal is done by the DCE, also known as the baseband decoder, which is the origin of the name baseband transmission to designate digital transmission.

1.10.3 Translating Information
Computers translate information from the computer user into binary
code in a process called digital encoding. Letters can be encoded by
replacing every letter with its numerical position (1-26) in the alphabet, and
then converting these decimal numbers into binary equivalents. A sound can
be encoded as a series of numbers that measure its pitch and volume at each
instant in time. An image can be encoded as a sequence of numbers that
represent the color and brightness of each portion of the picture. The
computer is able to decode information by converting the numbers back into
letters, sounds, or images. In the 1960s, computer scientists discovered how
to translate audio and video information into computer data, by expressing
every point in a color video image and every instant of sound as a string of
1s and 0s. TV programs and movies that have been digitized in that way can
be held in the memory of a computer as easily as textual documents.
However, encoding TV images required a huge number of 1s and 0s (or
bits). One TV signal sent digitally meant 90 million bits per second, which
was highly impractical, since it would take several channels to convey a
single digital TV signal. However, with the invention of digital compression in the late 1980s, the pictures could be transmitted in a highly abbreviated form.
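The letter-encoding scheme described above (replace each letter with its position, 1-26, in the alphabet, then convert that number to binary) can be sketched as follows. Five bits per letter is an assumption for illustration, since five bits are enough to cover the values 1 to 26:

```python
def encode_letters(text):
    """Replace each letter with its alphabet position (1-26) as 5-bit binary."""
    return [format(ord(ch) - ord("A") + 1, "05b") for ch in text.upper()]

def decode_letters(codes):
    """Convert each 5-bit code back to its letter."""
    return "".join(chr(int(code, 2) + ord("A") - 1) for code in codes)

codes = encode_letters("DATA")
print(codes)  # ['00100', '00001', '10100', '00001']  (D=4, A=1, T=20, A=1)
assert decode_letters(codes) == "DATA"
```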

A modem is a device used to convert between analog and digital signals. Modems are often used to enable computers to communicate with each other across telephone lines. A computer sends digital signals, which are converted by the modem to analog signals that can be transmitted through
telephone lines. When the signal reaches its destination, another modem
reconstructs the original digital signal so as to enable the receiving computer
to process the data. To convert a digital signal to an analog one, a modem
generates a carrier wave, and modulates it according to the digital signal.
The kind of modulation depends on the application and speed of operation
for which the modem was designed. For example, many high-speed modems
use a combination of amplitude modulation (where the amplitude of carrier
wave is changed to encode the digital information) and phase modulation
(where the phase of a carrier wave is changed to encode the digital
information). The process of receiving the analog signal and converting it
back to a digital signal is called demodulation. In fact, the word modem is derived from its two basic functions: modulation and demodulation.
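The modulate/demodulate cycle can be sketched for the simplest case, amplitude modulation of a bit stream (on-off keying: a 1 sends the carrier, a 0 sends silence). The carrier frequency, sample rate and bit duration below are arbitrary illustrative values, not parameters of any real modem standard:

```python
import math

CARRIER_HZ = 1200        # illustrative carrier frequency
SAMPLE_RATE = 9600       # samples per second
SAMPLES_PER_BIT = 32     # each bit lasts 32 samples

def modulate(bits):
    """Amplitude modulation: scale the carrier by 1.0 for a 1-bit, 0.0 for a 0-bit."""
    signal = []
    for i, bit in enumerate(bits):
        for n in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + n) / SAMPLE_RATE
            amplitude = 1.0 if bit else 0.0
            signal.append(amplitude * math.sin(2 * math.pi * CARRIER_HZ * t))
    return signal

def demodulate(signal):
    """Recover the bits by measuring the carrier energy in each bit period."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        energy = sum(s * s for s in signal[i:i + SAMPLES_PER_BIT])
        bits.append(1 if energy > SAMPLES_PER_BIT / 4 else 0)
    return bits

bits = [0, 1, 1, 0, 1, 0, 0, 1]
assert demodulate(modulate(bits)) == bits  # the receiver recovers the bit stream
```

Real modems modulate phase and frequency as well (as the text notes), but the modulate-then-demodulate round trip has the same shape.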

1.10.4 The principles of analog transmission

Analog data transmission consists of sending information over a physical transmission medium in the form of a wave. Data is transmitted via a carrier wave, a simple wave whose only purpose is to transport data by modification of one of its characteristics (amplitude, frequency or phase), and for this reason analog transmission is generally called carrier wave modulation transmission. Three types of analog transmission are defined, depending on which parameter of the carrier wave is being varied:

Transmission by amplitude modulation of the carrier wave
Transmission by frequency modulation of the carrier wave
Transmission by phase modulation of the carrier wave

1.10.5 Analog transmission of analog data

This type of transmission refers to a scheme in which the data to be transmitted is already in analog form. So, to transmit this signal, the DCTE must continuously convolve the signal to be transmitted and the carrier wave, so that the wave it transmits will be a combination of the carrier wave and the signal to be transmitted. In the case of transmission by amplitude modulation, for example, transmission occurs as follows:

1.10.6 Analog transmission of digital data

When digital data appeared on the scene, the transmission systems were still analog, so it was necessary to find a means of transmitting digital data in an analog manner. The solution to this problem was the modem. It performs the following functions:

When transmitting: it converts digital data (a sequence of 0s and 1s) into analog signals (continuous variation of a physical phenomenon). This process is called modulation.
When receiving: it converts the analog signal into digital data. This process is called demodulation.

In fact, the word modem is an acronym for MOdulator/DEModulator.

1.11 Signal encoding

To optimise transmission, the signal must be encoded to facilitate its transmission on the physical medium. There are various encoding systems for this purpose, which can be divided into two categories:

Two-level encoding: the signal can only take on a strictly negative or strictly positive value (-X or +X, where X represents a value of the physical quantity being used to transport the signal).
Three-level encoding: the signal can take on a strictly negative, null or strictly positive value (-X, 0 or +X).

Signal encoding can be of the following types:

1.11.1 NRZ Encoding

NRZ encoding (No Return to Zero) is the first and also the simplest encoding system. It consists of simply transforming the 0s into -X and the 1s into +X, which results in a bipolar encoding in which the signal is never null. As a result, the receiver can determine whether a signal is present or not.

1.11.2 NRZI Encoding

NRZI encoding is significantly different from NRZ encoding. With


this encoding, when the bit value is 1, the signal changes state after the clock
tick. When the bit value is 0, the signal does not change state.

NRZI encoding has numerous advantages, including:

Detection of whether a signal is present or not

The need for a low signal transmission current

However, it does have one problem: the presence of continuous current


during a sequence of zeros, which disturbs the synchronisation between
transmitter and receiver.
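The toggling rule can be sketched as follows (an illustrative Python fragment; the starting level of -X is an arbitrary assumption). Note how a run of zeros holds the line at a constant level, which is exactly the synchronisation problem just described:

```python
def nrzi_encode(bits, x=1):
    """NRZI: a 1 makes the signal change state; a 0 leaves it unchanged."""
    level, signal = -x, []          # assumed initial level: -X
    for b in bits:
        if b:
            level = -level          # toggle on a 1
        signal.append(level)
    return signal

print(nrzi_encode([1, 0, 1, 1, 0]))  # [1, 1, -1, 1, 1]
print(nrzi_encode([0, 0, 0, 0]))     # [-1, -1, -1, -1] constant during zeros
```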

1.11.3 Manchester Encoding

Manchester encoding, also called biphase encoding or PE (for


Phase Encode), introduces a transition in the middle of each interval. In fact,
it amounts to performing an exclusive OR (XOR) of the signal with the
clock signal, which translates into a rising edge when the bit value is zero and a falling edge in the opposite case.

Manchester encoding has numerous advantages, a few of which are:

As it does not take on a zero value, it is possible for the receiver to detect a signal.
On the other hand, its spectrum occupies a wide band.
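A sketch of the encoder in Python, following the convention stated in the text (a 0 gives a rising edge, a 1 a falling edge; the polarity convention varies between references):

```python
def manchester_encode(bits, x=1):
    """Manchester: a transition in the middle of every bit interval.
    Here a 0 is sent as (-X, +X), a rising edge, and a 1 as (+X, -X),
    a falling edge."""
    signal = []
    for b in bits:
        signal += [x, -x] if b else [-x, x]
    return signal

print(manchester_encode([0, 1, 1]))  # [-1, 1, 1, -1, 1, -1]
```

Because every bit interval contains a transition, the receiver can recover the clock from the signal itself; the price is that each bit occupies two signal elements, hence the wide spectrum.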

1.11.4 Delay Encoding (by Miller)

Delay encoding, also called Miller encoding, is similar to


Manchester encoding, except that a transition occurs in the middle of an
interval only when the bit is 1, which allows higher data rates.

1.11.5 Bipolar encoding

Bipolar encoding is a three-level encoding. It therefore uses three


states of the quantity transported on the physical medium:

The value 0 when the bit value is 0


Alternately +X and -X when the bit value is 1
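This alternation of polarity (known elsewhere as AMI, Alternate Mark Inversion) can be sketched as:

```python
def bipolar_encode(bits, x=1):
    """Bipolar encoding: 0 -> 0; successive 1s alternate between +X and -X."""
    level, signal = x, []
    for b in bits:
        if b:
            signal.append(level)
            level = -level          # alternate polarity for the next 1
        else:
            signal.append(0)
    return signal

print(bipolar_encode([1, 0, 1, 1]))  # [1, 0, -1, 1]
```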

1.12 Asynchronous and Synchronous transmission

Given the problems that arise with a parallel-type connection, serial


connections are normally used. However, since a single wire transports the
information, the problem is how to synchronise the transmitter and receiver; in other words, the receiver cannot necessarily distinguish the characters (or, more generally, the bit sequences) because the bits are sent one after the other. There are two types of transmission that address this problem:

1.12.1 Asynchronous transmission

In this type of transmission each character is sent at irregular


intervals in time (for example a user sending characters entered at the
keyboard in real time). So, for example, if a single 1 bit is transmitted during a long period of silence, the receiver will not be able to tell whether it was 00010000, 10000000 or 00000100.
To remedy this problem, each character is preceded by some information indicating the start of character transmission (the transmission start information is called a START bit) and ends by sending end-of-transmission information (called the STOP bit; there may even be several STOP bits).

Asynchronous transmission uses start and stop bits to signify the beginning and end of each character, so an eight-bit ASCII character is actually transmitted using 10 bits. For example, 'A' ("0100 0001") would become "1 0100 0001 0". The extra bit at each end of the transmission tells the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data are sent intermittently, as opposed to in a solid stream. In the example, the start and stop bits are the first and last bits. The start and stop bits must be of opposite polarity, which allows the receiver to recognize when a second packet of information is being sent.
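The 10-bit framing of the 'A' example can be reproduced with a short Python sketch. The start=1, stop=0 polarity follows the example above; note that many real serial links use the opposite convention (a 0 start bit and a 1 stop bit):

```python
def frame_char(byte, start=1, stop=0):
    """Wrap one 8-bit character in a START bit and a STOP bit,
    giving 10 bits on the wire for 8 bits of data."""
    data = [int(b) for b in format(byte, '08b')]  # e.g. 'A' -> 0100 0001
    return [start] + data + [stop]

print(frame_char(ord('A')))  # [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]
```

The two framing bits per character are the price of asynchrony: 20% of the line capacity carries no data.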

1.12.2 Synchronous transmission:

In this type of transmission the transmitter and receiver are paced by


the same clock. The receiver continuously receives (even when no bits are
transmitted) the information at the same rate the transmitter sends it. This is
why the transmitter and receiver are paced at the same speed. In addition,
supplementary information is inserted to guarantee that there are no errors
during transmission. During synchronous transmission, the bits are sent
successively with no separation between each character, so it is necessary to
insert synchronisation elements; this is called character-level
synchronisation.

Synchronous transmission uses no start and stop bits but instead


synchronizes transmission speeds at both the receiving and sending end of the transmission using clock signals built into each component. A continual
stream of data is then sent between the two nodes. Because there are no start and stop bits, the data transfer rate is higher, although more errors will occur: the clocks will eventually get out of sync, so the receiving device no longer samples the line at the moments agreed in the protocol for sending and receiving data, and some bytes can become corrupted (by losing bits). Ways to get around this problem include re-synchronization of the clocks and the use of check digits to ensure that each byte is correctly interpreted and received.

The main disadvantage of synchronous transmission is recognising


the data at the receiver, as there may be differences between the transmitter
and receiver clocks. That is why each data transmission must be sustained
long enough for the receiver to distinguish it. As a result, the transmission
speed cannot be very high in a synchronous link.

Chapter One
Introduction to Data Transmission
End Chapter Quizzes

1. In simplex transmission
A. Data format is simple
B. Data transmission is one way
C. Data can be transmitted one way only
D. None of the above

2. In half duplex data transmission


A. Data can be transmitted in one direction only
B Data can be transmitted in both directions
C Data can be transmitted in both directions simultaneously
D None of the above

3. In asynchronous transmission
A Inter-character gap is fixed
B Inter-character gap is variable
C Inter-character gap is always zero
D None

4. In synchronous data transmission data from various users


A Require header
B Do not require header
C Sometimes require header

D None

5. The frequency range used in satellite communication is of the order of


A KHz
B MHz
C GHz
D None of the above

6. Baud, the unit for measuring the data transmission speed, is equal to
A 1 bit per second
B 1 byte per second
C 2 bytes per second
D None of the above

7. Data transfer rate in modems is measured in


A Bits per minute
B Bandwidth
C Bits per second
D None of the above

8. Communication between computer system using standard telephone


service
A Requires a change to analog signal
B Is most efficient
C Produces little noise and few disturbances
D None of the above

9. What is telecommunications?
A Any linking of two computers
B Linking two computers through the telephone system
C Linking computer systems through direct high speed links
D All of the above

10. Which of the following is not a bounded media of transmission of RF


energy
A UTP
B STP
C Laser beam
D Fiber optic cable

CHAPTER – TWO

DATA TRANSMISSION MEDIA

2.1 Introduction
Media is the general term used to describe the data path that forms
the physical channel between the sender and receiver. Media can be twisted
pair wire such as that used for telephone installation, coaxial cable of various
sizes and electrical characteristics, fiber optics and wireless supporting either
light waves or radio waves. Wire or fiber optic media are referred to as
bounded media. Wireless media are sometimes referred to as unbounded
media. There are different types of physical channels (communication
media) through which data can be transmitted from one point to another.
Some of the most common data transmission media are described as follows:

2.2 Coaxial cable

2.3 Twisted Pair

2.4 Optical fiber

2.2 Coaxial cable

Coaxial cable has long been the preferred form of cabling, for the
simple reason that it is inexpensive and easily handled (weight, flexibility,
etc.). A coaxial cable is made up of a central copper wire (called a core)
surrounded by an insulator, and then a braided metal shield.

The jacket protects the cable from the external environment. It is
usually made of rubber (or sometimes Polyvinyl Chloride (PVC) or
Teflon).
The shield (metal envelope) surrounding the cables protects the data
transmitted on the medium from interference (also called noise) that
could corrupt the data.
The insulator surrounding the central core is made of a dielectric
material that prevents any contact with the shield that could cause
electrical interactions (short circuit).
The core, which actually transports the data, generally consists of a
single copper strand or of several braided strands.

Thanks to its shield, coaxial cable can be used over long distances at
high speed (unlike twisted pair cable), however it is usually used for basic
installations. Note that there are also coaxial cables that have a double shield
(one insulating layer, one shield layer) and coaxial cables with four shields
(two insulating layers, two shield layers).

Normally, two types of coaxial cable are used:

2.2.1 10Base2 - thin coaxial cable: (called Thinnet or CheaperNet)


is a thin cable (6 mm in diameter) that is white (or grayish) by
convention. It is very flexible and can be used in most networks by connecting it directly to the network card. It is able to transport a signal
up to around 185 metres without line loss.
It is part of the RG-58 family whose impedance (resistance) is 50 ohms.
The different types of thin coaxial cables are differentiated by the central
part of the cable (core).

Cable        Description
RG-58 /U     Central core consisting of a single copper strand
RG-58 A/U    Braided central core
RG-58 C/U    Military version of RG-58 A/U
RG-59        Wide band transmission (cable television)
RG-6         Thicker diameter, recommended for higher frequencies than RG-59
RG-62        ARCnet networks

2.2.2 10Base5 - thick coaxial cable: (Thicknet or Thick Ethernet and


also called Yellow Cable, because of its yellow colour - by convention) is a
shielded cable with a thicker diameter (12 mm) and 50 ohm impedance. It
was used for a long time in Ethernet networks, which is why it is also known
as "Standard Ethernet Cable". Given that it has a larger-diameter core, it is
able to carry signals over long distances: up to 500 meters without line loss
(and without signal reamplification). It has a bandwidth of 10 Mbps and is
very often used as a backbone to connect networks whose computers are
connected with Thinnet. However, because of its diameter, it is less flexible
than Thinnet.

Transceiver: the connection between Thinnet and Thicknet

Thinnet and Thicknet are connected using a transceiver. It is


equipped with a so-called "vampire" plug that makes the real physical connection to the central core of the Thicknet cable by piercing its insulating envelope. The transceiver cable (drop cable) connects to an AUI
(Attachment Unit Interface) connector, also called a DIX (Digital Intel
Xerox) connector or a DB 15 (SUB-D 15) connector.

Coaxial cable connectors

Thinnet and Thicknet both use BNC (Bayonet-Neill-Concelman or


British Naval Connector) connectors to hook up the cables to computers.
The following connectors are in the BNC family:

BNC cable connector: this is soldered or crimped to the end of the


cable.
BNC T-connector: this connects the computer's network card to the
network cable.
BNC Extender: this joins two coaxial cable segments to form a
longer one.

BNC terminator: this is placed at each end of a cable in a Bus network to absorb interference signals. It is connected to earth. A bus network cannot function without them.

2.3 Twisted pair cabling

In its simplest form, twisted-pair cable consists of two copper


strands woven into a braid and covered with insulation, typically about 1
mm thick. The wires are twisted together in a helical form. The purpose of twisting the wires is to reduce electrical interference from similar pairs close by.

Two types of twisted pair cable are generally recognized:

2.3.1 Unshielded Twisted Pair (UTP)

2.3.2 Shielded Twisted Pair (STP)

A cable is often made of several twisted pairs grouped together
inside a protective jacket. The twisting eliminates noise (electrical
interference) due to adjacent pairs or other sources (motors, relays,
transformers). Twisted pair is therefore suitable for a local network with few
nodes, a limited budget and simple connectivity. However, over long
distances at high data rates it does not guarantee data integrity (i.e. loss-less
data transmission).

2.3.1 Unshielded Twisted Pair (UTP)

UTP cable complies with the 10BaseT specification. This is the


most commonly used twisted pair type and the most widely used on local
networks. Here are some of its characteristics:

Maximum segment length: 100 metres


Composition: 2 copper wires covered with insulation
UTP Standards: determine the number of twists per foot (33 cm) of
cable depending on the intended use
UTP: collected in the EIA/TIA (Electronic Industries Association /
Telecommunication Industries Association) Commercial Building
Wiring Standard 568. The EIA/TIA 568 standard used UTP to create
standards applicable to all sorts of spaces and cabling situations,
thereby guaranteeing the public homogeneous products.

These standards include five categories of UTP cables:

Category 1: Traditional telephone cable (voice but no data


transmission)

Category 2: Data transmission up to a maximum of 4 Mbit/s (ISDN).
This type of cable contains 4 twisted pairs
Category 3: 10 Mbit/s maximum. This type of cable contains 4
twisted pairs and 3 twists per foot
Category 4: 16 Mbit/s maximum. This type of cable contains 4
copper twisted pairs
Category 5: 100 Mbit/s maximum. This type of cable contains 4
copper twisted pairs
Category 5e: 1000 Mbit/s maximum. This type of cable contains 4
copper twisted pairs

Most telephone installations use UTP cable. Many buildings are pre-
wired for this type of installation (often in sufficient number to satisfy future
requirements). If the pre-installed twisted pair is of good quality, it can be
used to transfer data in a computer network. Attention must be paid,
however, to the number of twists and other electrical characteristics required
for quality data transmission. UTP's major problem is that it is particularly
susceptible to interference (signals from one line mixing with those of
another line). The only solution to this is shielding.

2.3.2 Shielded Twisted Pair (STP)

STP (Shielded Twisted Pair) cable uses a copper jacket that is of


better quality and more protective than the jacket used for UTP cable. It
contains a protective envelope between the pairs and around the pairs. In an
STP cable, the copper wires of one pair are themselves twisted, which
provides STP cable with excellent shielding (in other words, better protection against interference). It also allows faster transmission over a longer distance.

Twisted pair connectors

Twisted pair cable is connected using an RJ-45 connector. This


connector is similar to the RJ-11 used in telephony, but differs on a few
points: RJ-45 is slightly larger and cannot be inserted into an RJ-11 jack. In
addition, the RJ-45 has eight pins while the RJ-11 has no more than six,
usually only four.

2.4 Fiber optics

Fiber optic is the newest form of bounded media. This media is


superior in data handling and security characteristics. Fiber optic cabling is
particularly suited to links between distributors (central link between several
buildings, known as backbone) as it allows connections over long distances
(from several kilometres to 60 km in the case of single-mode fiber) without
requiring earthing. Furthermore, this type of cable is very secure as it is
extremely difficult to tap into such a cable.

However, despite its mechanical flexibility, this cable type is not


suitable for local network connections as it is difficult to install and is very
expensive. For this reason, twisted pair or coaxial cable are preferred for
short links.

Optical fiber is a cable with numerous advantages:

Light-weight

Immune to noise
Low attenuation
Tolerates data rates on the order of 100 Mbps
Bandwidth from tens of megahertz to several gigahertz (single-mode fiber).

2.5 Multiplexing

2.5.1 Introduction

Multiplexing refers to the ability to transmit data coming from


several pairs of equipment (transmitters and receivers) called low-speed
channels on a single physical medium (called the high-speed channel).

A multiplexer is the multiplexing device that combines the signals


from the transmitters and sends them over the high-speed channel. A
demultiplexer is the multiplexing device via which the receivers are
connected to the high-speed channel.

2.5.2 Types of multiplexing

Multiplexing can be studied after dividing it into following types:

2.5.2.1 Frequency-division multiplexing

Frequency-division multiplexing, also called FDM, makes it


possible to share the available frequency band on the high-speed channel by
dividing it into a series of narrower-band channels so as to be able to
continuously send signals coming from the different low-speed channels
over the high-speed channel. This process is used, in particular, on telephone
lines and twisted-pair physical connections to increase the data rate.

2.5.2.2 Time-division multiplexing

In time-division multiplexing, also called TDM, the signals from the


different low-speed channels are sampled and transmitted successively on
the high-speed channel by allocating each channel in turn all of the
bandwidth, even if it does not have any data to transmit.

2.5.2.3 Statistical multiplexing

Statistical multiplexing is similar to time-division multiplexing


except that it only transmits low-speed channels that actually have data on
the high-speed channel. The name of this type of multiplexing comes from
the fact that the multiplexers base their behaviour on statistics concerning
the data rate of each low-speed channel. Since the high-speed line does not
transmit the empty channels, performance is better than with time-division
multiplexing.
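The difference between the two schemes can be illustrated with a toy Python sketch (the queue structure and channel tags below are assumptions for illustration, not a real protocol). Time-division multiplexing transmits one slot per channel, full or empty; statistical multiplexing sends only the occupied channels, each tagged with its channel number so that the demultiplexer can route the data:

```python
def tdm_frame(queues):
    """Time-division multiplexing: one slot per channel, in fixed order,
    even when a channel has nothing to send (None marks an empty slot)."""
    return [q[0] if q else None for q in queues]

def statistical_frame(queues):
    """Statistical multiplexing: only channels that actually have data
    are sent, each tagged with its channel number."""
    return [(i, q[0]) for i, q in enumerate(queues) if q]

queues = [['a'], [], ['c'], []]       # 4 low-speed channels, 2 idle
print(tdm_frame(queues))              # ['a', None, 'c', None]
print(statistical_frame(queues))      # [(0, 'a'), (2, 'c')]
```

The empty slots in the TDM frame are the wasted capacity that statistical multiplexing avoids, at the cost of carrying a channel identifier with each unit of data.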

2.6 Types of Data communication

2.6.1 Simplex communication

Simplex communication is a name for a type of communication


circuit. There are two (contradictory) definitions that have been used for the
term. When one definition is used for "simplex", then the other definition is
actually referred to as half duplex.

According to the ANSI definition, a simplex circuit is one where all


signals can flow in only one direction. These systems are often employed
in broadcast networks, where the receivers do not need to send any data
back to the transmitter/broadcaster.

A duplex communication system is a system composed of two


connected parties or devices which can communicate with one another in
both directions. (The term duplex is not used when describing
communication between more than two parties or devices.) Duplex systems
are employed in many communications networks, either to allow for a
communication "two-way street" between two connected parties, or to
provide a "reverse path" for the monitoring and remote adjustment of
equipment in the field. Systems that don't need the duplex capability include
broadcast systems, where one station transmits, and everyone else just
"listens", and in some missile control systems, where the launcher just needs
to command the missile where to go, and the launcher doesn't need to
receive any information from the missile. Also, there are spacecraft such as
satellites and space probes that have lost their capability to receive any commands, but they can continue to transmit radio signals through their
antennas. Some early satellites (such as Sputnik 1) were designed as
transmit-only spacecraft. Pioneer 6 has transmitted for decades without
being able to receive anything.

2.6.2 Half-duplex communication

A simple illustration of a half-duplex communication system.

A half-duplex system provides for communication in both


directions, but only one direction at a time (not simultaneously). Typically,
once a party begins receiving a signal, it must wait for the transmitter to stop
transmitting, before replying.

An example of a half-duplex system is a two-party system such as a


"walkie-talkie" style two-way radio, wherein one must use "Over" or another
previously designated command to indicate the end of transmission, and
ensure that only one party transmits at a time, because both parties transmit
on the same frequency.

A good analogy for a half-duplex system would be a one lane road


with traffic controllers at each end. Traffic can flow in both directions, but only one direction at a time with this being regulated by the traffic
controllers.

Note that this is one of two contradictory definitions for half-duplex.


This definition matches the ITU-T standard. For more detail, see the discussion of simplex communication in section 2.6.1 above.

In automatically-run communications systems, such as two-way


data-links, the time allocations for communications in a half-duplex system
can be firmly controlled by the hardware. Thus, there is no waste of the
channel for switching. For example, station A on one end of the data link
could be allowed to transmit for exactly one second, and then station B on
the other end could be allowed to transmit for exactly one second. And then
this cycle repeats over and over again.

2.6.3 Full-duplex communication

A simple illustration of a full-duplex communication system.

A full-duplex, or sometimes double-duplex system allows


communication in both directions, and unlike half-duplex, allows this to
happen simultaneously. Land-line telephone networks are full-duplex since
they allow both callers to speak and be heard at the same time. A good
analogy for a full-duplex system would be a two-lane road with one lane for
each direction.

Examples: Telephone, Mobile Phone, etc.

Two way radios can be, for instance, designed as full-duplex


systems, which transmit on one frequency and receive on a different
frequency. This is also called frequency-division duplex. Frequency-division
duplex systems can be extended to farther distances using pairs of simple
repeater stations, because the communications transmitted on any one
frequency always travel in the same direction.

Full-duplex Ethernet connections work by making simultaneous use


of two physical pairs of twisted cable (which are inside the jacket), where
one pair is used for receiving packets and one pair is used for sending
packets (two pairs per direction for some types of Ethernet), to a directly
connected device. This effectively makes the cable itself a collision-free
environment and doubles the maximum data capacity that can be supported
by the connection.

There are several benefits to using full-duplex over half-duplex, which are:

First, time is not wasted since no frames need to be retransmitted as


there are no collisions.
Secondly, the full data capacity is available in both directions because
the send and receive functions are separated.
Third, stations (or nodes) do not have to wait until others complete
their transmission since there is only one transmitter for each twisted
pair.

2.7 Emulation of full-duplex in shared physical media

Where channel access methods are used in point-to-multipoint


networks such as cellular networks for dividing forward and reverse
communication channels on the same physical communications medium,
they are known as duplexing methods, such as:

2.7.1 Time-division duplexing

Time-division duplexing is the application of time-division


multiplexing to separate outward and return signals. It emulates full-duplex
communication over a half-duplex communication link. Time-division
duplex has a strong advantage in the case where there is asymmetry of the
uplink and downlink data rates. As the amount of uplink data increases,
more communication capacity can dynamically be allocated to that, and as
the demand shrinks capacity can be taken away. Likewise in the downlink
direction.

Examples of Time Division Duplexing systems are:

The W-CDMA (for indoor use)


UMTS-TDD's TD-CDMA air interface
The TD-SCDMA system
DECT
IEEE 802.16 WiMAX
Half-duplex packet mode networks based on carrier sense multiple
access, for example 2-wire or hubbed Ethernet, wireless local area networks and Bluetooth, can be considered as Time Division Duplex
systems, albeit not TDMA with fixed frame-lengths.

2.7.2 Frequency-Division Duplexing

Frequency-division duplexing means that the transmitter and


receiver operate at different carrier frequencies. The term is frequently used
in ham radio operation, where an operator is attempting to contact a repeater
station. The station must be able to send and receive a transmission at the
same time, and does so by altering the frequency at which it sends and
receives slightly. This mode of operation is referred to as "duplex mode" or
"offset mode".

Uplink and downlink sub-bands are said to be separated by the


"frequency offset". Frequency-division duplexing can be efficient in the case
of symmetric traffic. In this case time-division duplexing tends to waste
bandwidth during the switch-over from transmitting to receiving, has greater
inherent latency, and may require more complex circuitry.

Another advantage of frequency-division duplexing is that it makes


radio planning easier and more efficient since base stations do not "hear"
each other (as they transmit and receive in different sub-bands) and therefore
will normally not interfere with each other. Conversely, with time-division
duplexing systems, care must be taken to keep guard times between
neighboring base stations (which decreases spectral efficiency) or to
synchronize base stations, so that they will transmit and receive at the same
time (which increases network complexity and therefore cost, and reduces bandwidth allocation flexibility, as all base stations and sectors will be forced to use the same uplink/downlink ratio).

Examples of Frequency Division Duplexing systems are:

ADSL and VDSL


Most cellular systems, including the UMTS/WCDMA Frequency
Division Duplexing mode and the cdma2000 system.
IEEE 802.16 WiMax Frequency Division Duplexing mode

2.7.3 Echo cancellation

Echo cancellation can also implement full-duplex communications


over certain types of shared media. In this configuration, both devices send
and receive over the same medium at the same time. When processing the
signal it receives, a transceiver removes the "echo" of the signal it sent,
leaving the other transceiver's signal only.
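The principle can be sketched in a few lines of Python. This is an idealized illustration with a fixed, known echo gain; real echo cancellers must estimate the echo path adaptively:

```python
def cancel_echo(received, sent, echo_gain=0.5):
    """Idealized echo cancellation: subtract a scaled copy of our own
    transmitted signal from what we receive, leaving the far-end signal.
    echo_gain is assumed known here; real cancellers estimate it."""
    return [r - echo_gain * s for r, s in zip(received, sent)]

far_end = [0.1, -0.2, 0.3]                       # the other party's signal
sent = [1.0, 1.0, -1.0]                          # what we transmitted
line = [f + 0.5 * s for f, s in zip(far_end, sent)]  # far end + our echo
recovered = cancel_echo(line, sent)
print([round(v, 6) for v in recovered])          # [0.1, -0.2, 0.3]
```

After subtraction, only the far-end transceiver's signal remains, so both parties can use the full bandwidth of the shared medium simultaneously.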

Echo cancellation is at the heart of the V.32, V.34, V.56, and V.90
modem standards. Echo cancellers are available as both software and
hardware solutions. They can be independent components in a
communications system or integrated into the communication system's
central processing unit. Devices that do not eliminate echo in their systems
sometimes will not produce good full-duplex performance.

Chapter Two
Data Transmission Media
End Chapter Quizzes

1. Typical bandwidth of optical fibers is


A Order of G Hz
B Order of K Hz
C Order of Hz
D None

2. In MODEMS
A Several digital signals are multiplexed
B Digital signal changes some characteristics of a carrier wave
C Digital signal is amplified
D None of the above

3. A large number of computers in a large geographical area can be


efficiently connected using
A Twisted pair lines
B Coaxial cables
C Communication satellites
D None of the above

4. Data transfer using telephone system is


A Time division multiplexing
B Space division multiplexing

C Frequency division multiplexing
D None of the above

5. Fiber optic communication system uses


A Simplex transmission
B Full duplex
C Half duplex
D None of the above

6. Which of the following wired transmission media is the fastest?


A Twisted pair
B Fiber optics
C Coaxial
D Cellular phone

7. Which of the following is not a commonly used network


architecture?
A Ring
B Star
C Candle
D Multidrop

8. The Ethernet uses which of these transmission mediums?


A Twisted pair
B Coaxial cable
C Fiber optics
D None of the above

9. ISDN is a(n) --- technology
A Twisted pair
B Coaxial and fiber
C Wireless
D All fiber

10. In which topology is a data packet removed by the source after it has circulated to its destination?
A Ring
B Bus
C Star
D None of the above

CHAPTER THREE

DATA NETWORKS

3.1 Introduction

A computer network is a group of interconnected computers.


Networks may be classified according to a wide variety of characteristics.
This article provides a general overview of some types and categories and
also presents the basic components of a network. So a network is a
collection of computers and devices connected to each other. The network
allows computers to communicate with each other and share resources and
information. The Advanced Research Projects Agency (ARPA) designed the "Advanced Research Projects Agency Network" (ARPANET) for the United States Department of Defense. Built in the late 1960s and early 1970s, it was the first computer network in the world.
Computer networking is the engineering discipline concerned with
communication between computer systems or devices. Networking, routers,
routing protocols, and networking over the public Internet have their
specifications defined in documents called RFCs. Computer networking is
sometimes considered a sub-discipline of telecommunications, computer
science, information technology and/or computer engineering. Computer
networks rely heavily upon the theoretical and practical application of these
scientific and engineering disciplines.
A computer network is any set of computers or devices connected to
each other with the ability to exchange data. Networks are interconnected using a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, and various wireless technologies.

3.2 History of Network

Before the advent of computer networks that were based upon some
type of telecommunications system, communication between calculation
machines and early computers was performed by human users by carrying
instructions between them. Much of the social behavior seen in today's
Internet was demonstrably present in nineteenth-century telegraph networks,
and arguably in even earlier networks using visual signals.
In September 1940 George Stibitz used a teletype machine to send
instructions for a problem set from his Model K at Dartmouth College in
New Hampshire to his Complex Number Calculator in New York and
received results back by the same means. Linking output systems like
teletypes to computers was an interest at the Advanced Research Projects
Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a
working group he called the "Intergalactic Network", a precursor to the
ARPANet.
In 1964, researchers at Dartmouth developed the Dartmouth Time
Sharing System for distributed users of large computer systems. The same
year, at MIT, a research group supported by General Electric and Bell Labs
used a computer (DEC's PDP-8) to route and manage telephone connections.
Throughout the 1960s Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems which used
datagrams or packets that could be used in a packet switched network
between computer systems. In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN). The first widely used PSTN
switch that used true computer control was the Western Electric 1ESS
switch, introduced in 1965.
In 1969 the University of California at Los Angeles, the Stanford
Research Institute (SRI), the University of California at Santa Barbara, and
the University of Utah were connected as the beginning of the ARPANet
network using 50 kbit/s circuits. Commercial services using X.25, an
alternative architecture to the TCP/IP suite, were deployed in 1972.
Computer networks, and the technologies needed to connect and
communicate through and between them, continue to drive computer
hardware, software, and peripherals industries. This expansion is mirrored
by growth in the numbers and types of users of networks from the researcher
to the home user.
Today, computer networks are the core of modern communication.
For example, all modern aspects of the Public Switched Telephone Network
(PSTN) are computer-controlled, and telephony increasingly runs over the
Internet Protocol, although not necessarily the public Internet. The scope of
communication has increased significantly in the past decade and this boom
in communications would not have been possible without the progressively
advancing computer network.

3.3 Network classification

The following list presents categories used for classifying networks.

3.3.1 Connection method

Computer networks can also be classified according to the hardware
and software technology that is used to interconnect the individual devices in
the network, such as Optical fiber, Ethernet, Wireless LAN, HomePNA, or
Power line communication.
Ethernet uses physical wiring to connect devices. Frequently deployed
devices include hubs, switches, bridges and/or routers. Wireless LAN
technology is designed to connect devices without wiring. These devices use
radio waves or infrared signals as a transmission medium.

3.3.2 Scale

Based on their scale, networks can be classified as Local Area
Network (LAN), Wide Area Network (WAN), Metropolitan Area Network
(MAN), Personal Area Network (PAN), Virtual Private Network (VPN),
Campus Area Network (CAN), Storage Area Network (SAN), etc.

3.3.3 Functional relationship (network architecture)

Computer networks may be classified according to the functional
relationships which exist among the elements of the network, e.g., Active
Networking, Client-server and Peer-to-peer (workgroup) architecture.

3.3.4 Network topology

Computer networks may be classified according to the network
topology upon which the network is based, such as bus network, star
network, ring network, mesh network, star-bus network, tree or hierarchical
topology network. Network topology signifies the way in which devices in
the network see their logical relations to one another. The use of the term
"logical" here is significant. That is, network topology is independent of the
"physical" layout of the network. Even if networked computers are
physically placed in a linear arrangement, if they are connected via a hub,
the network has a Star topology, rather than a bus topology. In this regard
the visual and operational characteristics of a network are distinct; the
logical network topology is not necessarily the same as the physical layout.
Networks may also be classified based on the method used to convey the
data; these include digital and analog networks.

3.4 Types of networks

Networking is a complex part of computing that makes up most of the
IT industry. Without networks, almost all communication in the world
would cease to happen. It is because of networking that telephones,
televisions, the internet, etc. work.
Following is a list of the most common types of computer networks.

3.4.1 Personal area network

A personal area network (PAN) is a computer network used for
communication among computer devices close to one person. Some
examples of devices that are used in a PAN are printers, fax machines,
telephones, PDAs and scanners. The reach of a PAN is typically about 20-30
feet (approximately 6-9 meters), but this is expected to increase with
technology improvements.

3.4.2 Local area network

A local area network (LAN) is a computer network covering a small
physical area, like a home, office, or small group of buildings, such as a
school or an airport. Current LANs are most likely to be based on Ethernet
technology. For example, a library may have a wired or wireless LAN for
users to interconnect local devices (e.g., printers and servers) and to connect
to the internet. On a wired LAN, PCs in the library are typically connected
by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a
system of interconnected devices that eventually connect to the Internet. The
cables to the servers are typically Cat5e enhanced cable, which supports
IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different
IEEE protocol, 802.11b, 802.11g or possibly 802.11n. The staff computers
can reach the color printer, checkout records, the academic network and the
Internet. All user computers can reach the Internet and the card catalog, and
each workgroup can reach its local printer; note that the printers are not
accessible from outside their workgroup.
In a typical library network with a branching tree topology and
controlled access to resources, all interconnected devices must understand
the network layer (layer 3), because they handle multiple subnets. The
switches inside the library, which have only 10/100 Mbit/s Ethernet
connections to user devices and a Gigabit Ethernet connection to the
central router, could be called "layer 3 switches" because they have only
Ethernet interfaces and must understand IP. It would be more correct to call
them access routers, where the router at the top is a distribution router that
connects to the Internet and to the academic networks' customer access routers.
The defining characteristics of LANs, in contrast to WANs (wide area
networks), include their higher data transfer rates, smaller geographic range,
and lack of a need for leased telecommunication lines. Current Ethernet and
other IEEE 802.3 LAN technologies operate at data transfer rates up to
10 Gbit/s, and IEEE has projects investigating the standardization of
100 Gbit/s and possibly 400 Gbit/s.
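To make these rates concrete, the ideal transfer time for a file scales inversely with the link rate. The following is a minimal sketch that ignores protocol overhead, which in practice reduces real throughput:

```python
def transfer_time_seconds(size_bytes: int, rate_bits_per_sec: float) -> float:
    """Ideal transfer time: bytes are converted to bits, then divided by
    the link rate. Protocol overhead and contention are ignored."""
    return size_bytes * 8 / rate_bits_per_sec

# A 1 GB file over Fast Ethernet (100 Mbit/s) versus Gigabit Ethernet (1 Gbit/s).
size = 1_000_000_000
print(transfer_time_seconds(size, 100e6))  # 80.0 seconds
print(transfer_time_seconds(size, 1e9))    # 8.0 seconds
```

Moving from Fast Ethernet to Gigabit Ethernet cuts the ideal transfer time by a factor of ten, which is why server links are typically provisioned at the higher rate.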

3.4.3 Campus area network

A campus area network (CAN) is a computer network made up of
an interconnection of local area networks (LANs) within a limited
geographical area. It can be considered one form of metropolitan area
network, specific to an academic setting.
In the case of a university campus-based campus area network, the
network is likely to link a variety of campus buildings, including academic
departments, the university library and student residence halls. A campus
area network is larger than a local area network but smaller than a wide area
network (WAN).
The main aim of a campus area network is to facilitate students' access
to the internet and university resources. A CAN connects two or more LANs
but is limited to a specific and contiguous geographical area such as a
college campus, industrial complex, office building, or military base. A
CAN may be considered a type of MAN (metropolitan area network), but is
generally limited to a smaller area than a typical MAN. This term is most
often used to discuss the implementation of networks for a contiguous area,
and should not be confused with a Controller Area Network. A LAN
connects network devices over a relatively short distance. A networked
office building, school, or home usually contains a single LAN, though
sometimes one building will contain a few small LANs (perhaps one per
room), and occasionally a LAN will span a group of nearby buildings. In
TCP/IP networking, a LAN is often, but not always, implemented as a
single IP subnet.
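The idea of a LAN as a single IP subnet can be illustrated with Python's standard `ipaddress` module. The network address below is a hypothetical private LAN chosen for illustration:

```python
import ipaddress

# A hypothetical office LAN numbered as a single /24 IPv4 subnet.
lan = ipaddress.ip_network("192.168.10.0/24")

print(lan.num_addresses)                             # 256 addresses in the block
print(ipaddress.ip_address("192.168.10.42") in lan)  # True: same LAN
print(ipaddress.ip_address("192.168.11.42") in lan)  # False: a different subnet
```

A host decides whether a destination is on its own LAN (deliver directly) or elsewhere (send to the router) with exactly this kind of subnet membership test.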

3.4.4 Metropolitan area network

A metropolitan area network (MAN) is a network that connects two
or more local area networks or campus area networks together but does not
extend beyond the boundaries of the immediate town or city. Routers,
switches and hubs are connected to create a metropolitan area network.

3.4.5 Wide area network

A wide area network (WAN) is a computer network that covers a
broad area (i.e., any network whose communications links cross
metropolitan, regional, or national boundaries [1]). Less formally, a WAN is
a network that uses routers and public communications links [1]. Contrast
this with personal area networks (PANs), local area networks (LANs),
campus area networks (CANs), and metropolitan area networks (MANs),
which are usually limited to a room, building, campus or specific
metropolitan area (e.g. a city) respectively. The largest and best-known
example of a WAN is the Internet. A WAN is a data communications
network that covers a relatively broad geographic area (e.g., from one city
or country to another) and often uses transmission facilities provided by
common carriers, such as telephone companies. WAN technologies
generally function at the lower three layers of the OSI reference model: the
physical layer, the data link layer, and the network layer.

3.4.6 Wireless network

Wireless network refers to any type of computer network that is
wireless, and is commonly associated with a telecommunications network
whose interconnections between nodes are implemented without the use of
wires. Wireless telecommunications networks are generally implemented
with some type of remote information transmission system that uses
electromagnetic waves, such as radio waves, for the carrier; this
implementation usually takes place at the physical level or "layer" of the
network.

A wireless network is basically the same as a LAN or a WAN except
that there are no wires between hosts and servers. The data is transferred
over sets of radio transceivers. These types of networks are beneficial when
it is too costly or inconvenient to run the necessary cables. For more
information, see Wireless LAN and Wireless wide area network. The media
access protocols for LANs come from the IEEE. The most common IEEE
802.11 WLANs cover, depending on antennas, ranges from hundreds of
meters to a few kilometers. For larger areas, communications satellites of
various types, cellular radio, and wireless local loop (IEEE 802.16) all have
advantages and disadvantages. Depending on the type of mobility needed,
the relevant standards may come from the IETF or the ITU.

3.4.6.1 Types of wireless networks

a. Wireless PAN

A wireless personal area network (WPAN) is a type of wireless
network that interconnects devices within a relatively small area, generally
within reach of a person. For example, Bluetooth provides a WPAN for
interconnecting a headset to a laptop. ZigBee also supports WPAN
applications.

b. Wireless LAN

A wireless local area network (WLAN) is a wireless alternative to a
wired local area network (LAN) that uses radio instead of wires to
transmit data back and forth between computers in a small area such as a
home, office, or school. Wireless LANs are standardized under the IEEE
802.11 series.

(Figure: screenshots of Wi-Fi network connections in Microsoft Windows,
showing that some networks are unencrypted, and therefore accessible to
anyone in range, while many others are encrypted and require a key.)

Wi-Fi: Wi-Fi is a commonly used wireless network in computer
systems to enable connection to the internet or other devices that have
Wi-Fi functionalities. Wi-Fi networks broadcast radio waves that can
be picked up by Wi-Fi receivers attached to different computers or
mobile phones.
Fixed Wireless Data: This implements point to point links between
computers or networks at two locations, often using dedicated
microwave or laser beams over line of sight paths. It is often used in
cities to connect networks in two or more buildings without physically
wiring the buildings together.

c. Wireless MAN

Wireless metropolitan area networks are a type of wireless network that
connects several wireless LANs.

WiMAX is the term used to refer to wireless MANs and is covered in
IEEE 802.16d/802.16e.

3.4.7 Mobile device networks

In recent decades, with the development of smart phones, cellular
telephone networks have been used to carry computer data in addition to
telephone conversations:

Global System for Mobile Communications (GSM): The GSM
network is divided into three major systems: the switching system, the
base station system, and the operation and support system. The cell
phone connects to the base station system, which then connects to the
operation and support station; it then connects to the switching station,
where the call is transferred to where it needs to go. GSM is the most
common standard and is used for a majority of cell phones.[4]
Personal Communications Service (PCS): PCS is a radio band that
can be used by mobile phones in North America. Sprint was the first
service provider to set up a PCS network.
D-AMPS: D-AMPS, which stands for Digital Advanced Mobile
Phone Service, is an upgraded version of AMPS but it is being phased
out due to advancement in technology. The newer GSM networks are
replacing the older system.

3.4.7.1 Uses and limitations of mobile device networks

1. Wireless networks have had a significant impact on the world as far back
as World War II. Through the use of wireless networks, information could
be sent overseas or behind enemy lines easily, efficiently and more reliably.
Since then, wireless networks have continued to develop and their uses have
grown significantly. Cellular phones are part of huge wireless network
systems. People use these phones daily to communicate with one another.
Sending information overseas is possible through wireless network systems
using satellites and other signals to communicate across the world.
Emergency services such as the police department utilize wireless networks
to communicate important information quickly. People and businesses use
wireless networks to send and share data quickly whether it be in a small
office building or across the world.

2. Another important use for wireless networks is as an inexpensive and
rapid way to be connected to the Internet in countries and regions where the
telecom infrastructure is poor or there is a lack of resources, as in most
developing countries.

3. Compatibility issues also arise when dealing with wireless networks.
Different components not made by the same company may not work
together, or might require extra work to fix incompatibilities. Wireless
networks are typically slower than those that are directly connected through
an Ethernet cable.

4. A wireless network is also more vulnerable, because anyone can try to
break into a network broadcasting a signal. Many networks offer WEP
(Wired Equivalent Privacy) security systems, which have been found to be
vulnerable to intrusion. Though WEP does block some intruders, the
security problems have caused some businesses to stick with wired
networks until security can be improved. Another type of security for
wireless networks is WPA (Wi-Fi Protected Access), which provides more
security than a WEP setup. The use of firewalls also helps to mitigate
security breaches in wireless networks that are more vulnerable.

3.4.8 Global area network

A global area network (GAN) specification is in development by
several groups, and there is no common definition. In general, however, a
GAN is a model for supporting mobile communications across an arbitrary
number of wireless LANs, satellite coverage areas, etc. The key challenge in
mobile communications is "handing off" the user communications from one
local coverage area to the next. In IEEE Project 802, this involves a
succession of terrestrial wireless local area networks (WLANs).

3.4.9 Virtual private network

A virtual private network (VPN) is a computer network in which
some of the links between nodes are carried by open connections or virtual
circuits in some larger network (e.g., the Internet) instead of by physical
wires. The link-layer protocols of the virtual network are said to be tunneled
through the larger network when this is the case. One common application is
secure communications through the public Internet, but a VPN need not
have explicit security features, such as authentication or content encryption.
VPNs, for example, can be used to separate the traffic of different user
communities over an underlying network with strong security features. A
VPN may have best-effort performance, or may have a defined service level
agreement (SLA) between the VPN customer and the VPN service provider.
Generally, a VPN has a topology more complex than point-to-point. A VPN
also allows computer users to appear to connect from an IP address other
than the one that actually connects their computer to the Internet.
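Tunneling can be sketched abstractly: the private packet rides as the opaque payload of an outer packet addressed between the tunnel endpoints on the public network. The dictionaries below are an illustration only, not any real protocol's wire format; the public addresses come from documentation ranges:

```python
# Toy model of tunneling. The private 10.x addresses are never routed by
# the larger (public) network; only the outer tunnel addresses are.

def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap the private packet as the payload of a public outer packet."""
    return {"src": tunnel_src, "dst": tunnel_dst, "payload": inner_packet}

def decapsulate(outer_packet: dict) -> dict:
    """At the far tunnel endpoint, recover the original private packet."""
    return outer_packet["payload"]

inner = {"src": "10.0.0.5", "dst": "10.0.1.9", "data": "internal traffic"}
outer = encapsulate(inner, "203.0.113.1", "198.51.100.7")
print(decapsulate(outer) == inner)  # True: delivered intact through the tunnel
```

Real VPNs add authentication and encryption around this same encapsulation step, but the routing idea is just this wrapping and unwrapping.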

3.4.10 Internetwork

Internetworking involves connecting two or more distinct computer
networks or network segments via a common routing technology. The result
is called an internetwork (often shortened to internet): two or more
networks or network segments connected using devices, such as routers, that
operate at layer 3 (the 'network' layer) of the OSI Basic Reference Model.
Any interconnection among or between public, private, commercial,
industrial, or governmental networks may also be defined as an
internetwork. In modern practice, the interconnected networks use the
Internet Protocol. There are at least three variants of internetwork,
depending on who administers them and who participates in them:

Intranet
Extranet
Internet

Intranets and extranets may or may not have connections to the
Internet. If connected to the Internet, the intranet or extranet is normally
protected from being accessed from the Internet without proper
authorization. The Internet is not considered to be a part of the intranet or
extranet, although it may serve as a portal for access to portions of an
extranet.

Intranet

An intranet is a set of networks, using the Internet Protocol and
IP-based tools such as web browsers and file transfer applications, that is
under the control of a single administrative entity. That administrative entity
closes the intranet to all but specific, authorized users. Most commonly, an
intranet is the internal network of an organization. A large intranet will
typically have at least one web server to provide users with organizational
information.

Extranet

An extranet is a network or internetwork that is limited in scope to a
single organization or entity but which also has limited connections to the
networks of one or more other, usually but not necessarily trusted,
organizations or entities (e.g., a company's customers may be given access
to some part of its intranet, thereby creating an extranet, while at the same
time the customers may not be considered 'trusted' from a security
standpoint). Technically, an extranet may also be categorized as a CAN,
MAN, WAN, or other type of network, although, by definition, an extranet
cannot consist of a single LAN; it must have at least one connection with an
external network.

Internet

The Internet is a specific internetwork. It consists of a worldwide
interconnection of governmental, academic, public, and private networks
based upon the networking technologies of the Internet Protocol Suite. It is
the successor of the Advanced Research Projects Agency Network
(ARPANET) developed by DARPA of the U.S. Department of Defense. The
Internet is also the communications backbone underlying the World Wide
Web (WWW). 'Internet' is most commonly spelled with a capital 'I' as a
proper noun, for historical reasons and to distinguish it from other generic
internetworks. Participants in the Internet use several hundred documented,
and often standardized, protocols compatible with the Internet Protocol
Suite and an addressing system (IP addresses) administered by the Internet
Assigned Numbers Authority and address registries. Service providers and
large enterprises exchange information about the reachability of their
address spaces through the Border Gateway Protocol (BGP), forming a
redundant worldwide mesh of transmission paths.

3.5 Views of networks

Users and network administrators often have different views of their
networks. Often, users who share printers and some servers form a
workgroup, which usually means they are in the same geographic location
and are on the same LAN. A community of interest has less of a connotation
of being in a local area, and should be thought of as a set of arbitrarily
located users who share a set of servers, and possibly also communicate via
peer-to-peer technologies.
Network administrators see networks from both physical and logical
perspectives. The physical perspective involves geographic locations,
physical cabling, and the network elements (e.g., routers, bridges and
application layer gateways) that interconnect the physical media. Logical
networks, called subnets in the TCP/IP architecture, map onto one or more
physical media. For example, a common practice in a campus of buildings is
to make a set of LAN cables in each building appear to be a common subnet,
using virtual LAN (VLAN) technology.
Both users and administrators will be aware, to varying extents, of the
trust and scope characteristics of a network. Again using TCP/IP
architectural terminology, an intranet is a community of interest under
private administration, usually by an enterprise, and is only accessible by
authorized users (e.g., employees). Intranets do not have to be connected to
the Internet, but generally have a limited connection. An extranet is an
extension of an intranet that allows secure communications to users outside
of the intranet (e.g., business partners, customers). Informally, the Internet is
the set of users, enterprises and content providers that are interconnected by
Internet Service Providers (ISPs). From an engineering standpoint, the
Internet is the set of subnets, and aggregates of subnets, which share the
registered IP address space and exchange information about the reachability
of those IP addresses using the Border Gateway Protocol. Typically, the
human-readable names of servers are translated to IP addresses,
transparently to users, via the directory function of the Domain Name
System (DNS).
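The name-to-address translation that DNS performs transparently for users can be seen directly with Python's standard `socket` module. Here we resolve "localhost", which resolves locally and needs no external DNS server:

```python
import socket

# Translate a human-readable name to an IPv4 address, as DNS does for users.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```

Replacing "localhost" with any public hostname performs the same lookup through the system's configured DNS resolver.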
Over the Internet, there can be business-to-business (B2B), business-
to-consumer (B2C) and consumer-to-consumer (C2C) communications.
Especially when money or sensitive information is exchanged, the
communications are apt to be secured by some form of communications
security mechanism. Intranets and extranets can be securely superimposed
onto the Internet, without any access by general Internet users, using secure
virtual private network (VPN) technology. When such a network is used for
multiplayer gaming, one computer typically acts as the server while the
others play through it.

3.6 Network topology

The network topology defines the way in which computers, printers,
and other devices are connected, physically and logically. A network
topology describes the layout of the wire and devices as well as the paths
used by data transmissions.

Network topology has two types:

a. Physical
b. Logical

Commonly used topologies include the following:

3.6.1 Bus
3.6.2 Ring
3.6.3 Star
3.6.4 Mesh
      o partially connected
      o fully connected (sometimes known as fully redundant)
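The difference between the two mesh variants can be quantified: a fully connected (fully redundant) mesh of n nodes needs a dedicated link between every pair of nodes, i.e. n(n-1)/2 links, which is why full meshes are rarely used beyond small network cores. A quick sketch:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in a fully connected mesh of n nodes."""
    return n * (n - 1) // 2

print(full_mesh_links(4))    # 6
print(full_mesh_links(10))   # 45
print(full_mesh_links(100))  # 4950: full redundancy scales poorly
```

A partially connected mesh keeps redundant paths only between the nodes that need them, trading some resilience for far fewer links.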

3.6.1 Multidrop bus

A multidrop bus (MDB) is a computer bus in which all components
are connected to the same set of electrical wires. A process of arbitration
determines which device gets the right to be the sender of information at any
point in time. The other devices must listen for the data that is intended to be
received by them.

Multidrop buses have the advantage of simplicity and extensibility,
but electronically are limited to around 200–400 MHz (because of
reflections on the wire from the printed circuit board (PCB) onto the die) and
10–20 cm distance (SCSI-1 has 6 metres). Multidrop standards such as PCI
are therefore being replaced by point-to-point systems such as PCI Express.
Multidrop buses are also used by vending machine controllers to
communicate with the vending machine's components, such as a currency
detector (coin or note reader). Not surprisingly, these MDB buses
communicate with the MDB protocol, a 9-bit serial protocol.
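The "listen and keep only what is addressed to you" behavior of a multidrop bus can be sketched in a few lines. The device addresses below are illustrative and not taken from the MDB specification:

```python
# Toy multidrop bus: the shared wires deliver every frame to every device,
# and each device keeps only the frames carrying its own address.

class BusDevice:
    def __init__(self, address: int):
        self.address = address
        self.received = []

    def listen(self, frame: dict) -> None:
        if frame["to"] == self.address:
            self.received.append(frame["data"])

devices = [BusDevice(addr) for addr in (0x08, 0x10, 0x30)]

def broadcast(frame: dict) -> None:
    for dev in devices:  # every device sees every frame on the shared wires
        dev.listen(frame)

broadcast({"to": 0x08, "data": "coin accepted"})
print(devices[0].received)  # ['coin accepted']: only the addressed device keeps it
```

Because every device hears every frame, the bus needs no switching hardware; the cost is that addressing and arbitration must be handled by the protocol.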

MDB in Vending Machines

MDB is utilized in vending machines to connect the bill acceptor and
coin changer mechanisms, and it evolved as the standard in vending
machines after 1995. The standard, however, provides no way to arbitrate
between multiple devices of the same type on one bus (e.g., three bill
acceptors and two coin mechanisms). Hence, it is only possible to put one
MDB-compliant bill acceptor and one MDB-compliant coin mechanism in a
vending machine or a series of vending machines.

Bus network

A bus network topology is a network architecture in which a set of
clients are connected via a shared communications line, called a bus. There
are several common instances of the bus architecture, including one on the
motherboard of most computers, and those in some versions of Ethernet
networks. Bus networks are the simplest way to connect multiple clients, but
may have problems when two clients want to transmit at the same time on
the same bus. Thus systems which use bus network architectures normally
have some scheme of collision handling or collision avoidance for
communication on the bus, quite often using Carrier Sense Multiple Access
or the presence of a bus master which controls access to the shared bus
resource.
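Carrier sense with random backoff can be sketched in a few lines: a station transmits only when it senses the bus idle, and otherwise defers for a random number of slots so that two waiting stations are unlikely to collide again. This is a simplified illustration, not the full CSMA/CD algorithm:

```python
import random

def try_transmit(bus_busy: bool, rng: random.Random) -> int:
    """Return 0 if the station may transmit now (bus sensed idle),
    otherwise the random number of backoff slots to wait."""
    if not bus_busy:
        return 0
    return rng.randint(1, 8)

rng = random.Random(42)
print(try_transmit(False, rng))  # 0: bus idle, transmit immediately
slots = try_transmit(True, rng)
print(1 <= slots <= 8)           # True: deferred for a random 1-8 slots
```

Randomizing the backoff is the key idea: if both waiting stations deferred for the same fixed time, they would simply collide again.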

A true bus network is passive – the computers on the bus simply listen
for a signal; they are not responsible for moving the signal along. However,
many active architectures can also be described as a "bus", as they provide
the same logical functions as a passive bus; for example, switched Ethernet
can still be regarded as a logical bus network, if not a physical one. Indeed,
the hardware may be abstracted away completely in the case of a software
bus. With the dominance of switched Ethernet over passive Ethernet, passive
bus networks are uncommon in wired networks. However, almost all current
wireless networks can be viewed as examples of passive bus networks, with
radio propagation serving as the shared passive medium.

The bus topology makes the addition of new devices straightforward.
The term used to describe clients is station or workstation in this type of
network. Bus network topology uses a broadcast channel, which means that
all attached stations can hear every transmission and all stations have equal
priority in using the network to transmit data.

3.6.1.1 Advantages and disadvantages of a bus network

Advantages: Following are the main advantages of bus topology:

Easy to implement and extend
Well suited for temporary or small networks not requiring high speeds
(quick setup)
Cheaper than other topologies
Cost effective, as only a single cable is used
Cable faults are easily identified

Disadvantages: Following are the main disadvantages of bus topology:

Limited cable length and number of stations
If there is a problem with the cable, the entire network goes down
Maintenance costs may be higher in the long run
Performance degrades as additional computers are added or on heavy
traffic
Proper termination is required at both ends of the cable to prevent
signal reflections
Significant capacitive load (each bus transaction must be able to reach
the most distant link)
It works best with a limited number of nodes
It is slower than the other topologies

3.6.2 Ring network

A ring network is a network topology in which each node connects to
exactly two other nodes, forming a single continuous pathway for signals
through each node: a ring. Data travels from node to node, with each node
along the way handling every packet. Because a ring topology provides only
one pathway between any two nodes, ring networks may be disrupted by the
failure of a single link. A node failure or cable break might isolate every
node attached to the ring. FDDI networks overcome this vulnerability by
sending data on a clockwise and a counterclockwise ring: in the event of a
break, data is wrapped back onto the complementary ring before it reaches
the end of the cable, maintaining a path to every node along the resulting
"C-ring". 802.5 networks, also known as IBM Token Ring networks, avoid
the weakness of a ring topology altogether: they actually use a star topology
at the physical layer and a Multistation Access Unit to imitate a ring at the
data link layer.

Many ring networks add a "counter-rotating ring" to form a redundant
topology. Such "dual ring" networks include Spatial Reuse Protocol, Fiber
Distributed Data Interface (FDDI), and Resilient Packet Ring.

3.6.2.1 Advantages and disadvantages of a ring network

Advantages: Following are the main advantages of ring topology:

Very orderly network where every device has access to the token and
the opportunity to transmit
Performs better than a star topology under heavy network load
Can create much larger networks using Token Ring
Does not require a network server to manage the connectivity between
the computers

Disadvantages: Following are the main disadvantages of ring topology:

One malfunctioning workstation or bad port in the MAU can create
problems for the entire network
Moves, adds and changes of devices can affect the network
Network adapter cards and MAUs are much more expensive than
Ethernet cards and hubs
Much slower than an Ethernet network under normal load

Misconceptions

"Token Ring is an example of a ring topology." 802.5 (Token Ring)


networks do not use a ring topology at layer 1. As explained above,
IBM Token Ring (802.5) networks imitate a ring at layer 2 but use a
physical star at layer 1.

81
"Rings prevent collisions." The term "ring" only refers to the layout of
the cables. It is true that there are no collisions on an IBM Token
Ring, but this is because of the layer 2 Media Access Control method,
not the physical topology (which again is a star, not a ring.) Token
passing, not rings, prevent collisions.

"Token passing happens on rings." Token passing is a way of managing


access to the cable, implemented at the MAC sublayer of layer 2. Ring
topology is the cable layout at layer one. It is possible to do token
passing on a bus (802.4) a star (802.5) or a ring (FDDI).
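Since token passing is a layer-2 access method independent of the physical layout, it can be modeled with nothing more than a cyclic visiting order. A minimal sketch, with illustrative station names:

```python
# Toy token passing: the token visits stations in a fixed cyclic order, and
# only the current token holder may transmit one queued frame.

def run_token_rounds(stations, queued, rounds=1):
    """Return the (station, frame) transmit order over the given rounds."""
    sent = []
    for _ in range(rounds):
        for holder in stations:        # token passes from station to station
            if queued.get(holder):     # the holder transmits only if it has data
                sent.append((holder, queued[holder].pop(0)))
    return sent

order = run_token_rounds(["A", "B", "C"], {"A": ["a1"], "C": ["c1", "c2"]}, rounds=2)
print(order)  # [('A', 'a1'), ('C', 'c1'), ('C', 'c2')]
```

Because only the token holder may transmit, no two stations ever send at once, which is exactly why token passing, rather than the cable layout, prevents collisions.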

3.6.3 Star network

Star networks are one of the most common computer network
topologies. In its simplest form, a star network consists of one central
switch, hub or computer, which acts as a conduit to transmit messages. Thus,
the hub and leaf nodes, and the transmission lines between them, form a
graph with the topology of a star. If the central node is passive, the
originating node must be able to tolerate the reception of an echo of its own
transmission, delayed by the two-way transmission time (i.e., to and from the
central node) plus any delay generated in the central node. An active star
network has an active central node that usually has the means to prevent
echo-related problems.

The star topology reduces the chance of network failure by connecting
all of the systems to a central node. When applied to a bus-based network,
this central hub rebroadcasts all transmissions received from any peripheral
node to all peripheral nodes on the network, sometimes including the
originating node. All peripheral nodes may thus communicate with all others
by transmitting to, and receiving from, the central node only. The failure of a
transmission line linking any peripheral node to the central node will isolate
that peripheral node from all others, but the rest of the systems will be
unaffected.
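As an illustration, the hypothetical sketch below (port names are invented) models the rebroadcast behavior just described: a frame received from one peripheral node is delivered to every other node, optionally echoed back to the sender.

```python
# Hypothetical sketch of a star hub applied to a bus-style network:
# every frame received on one port is rebroadcast to every other port.

def hub_rebroadcast(ports, sender, frame, echo_to_sender=False):
    """Return the list of (port, frame) deliveries for one transmission."""
    deliveries = []
    for port in ports:
        # A passive hub may echo the frame back to the originating node.
        if port != sender or echo_to_sender:
            deliveries.append((port, frame))
    return deliveries

ports = ["A", "B", "C", "D"]
print(hub_rebroadcast(ports, "A", "hello"))
# Every peripheral node except the sender receives the frame.
```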

3.6.3.1 Advantages and disadvantages of a star network

Advantages: Following are the main advantages of star topology:-

Better performance: This topology prevents data packets from passing
through unnecessary nodes. At most three devices and two links are
involved in any communication between two devices in the topology.
Although this topology places a heavy load on the central hub, if the
hub has adequate capacity, very high network utilization by one device
does not affect the other devices in the network.

Isolation of devices: Each device is inherently isolated by the link
that connects it to the hub. This makes isolating an individual
device fairly straightforward: it amounts to disconnecting the
device from the hub. This isolated nature also prevents any non-
centralized failure from affecting the network.
Benefits from centralization: As the central hub is the bottleneck,
increasing the capacity of the central hub, or adding additional devices
to the star, can scale the network very easily. The central nature also
allows inspection of the traffic through the network, which can help
analyze all the traffic in the network and detect suspicious
behavior.
Simplicity: The topology is easy to understand, establish, and
navigate. The simple topology obviates the need for complex routing
or message-passing protocols. As noted earlier, the isolation and
centralization simplify fault detection, as each link or device can be
probed individually.

Disadvantages: Following are the main disadvantages of star topology:-

The primary disadvantage of a star topology is the high
dependence of the system on the functioning of the central hub.
While the failure of an individual link only results in the
isolation of a single node, the failure of the central hub renders
the network inoperable, immediately isolating all nodes. The
performance and scalability of the network also depend on the
capabilities of the hub.

Network size is limited by the number of connections that can
be made to the hub, and performance for the entire network is
capped by its throughput. While in theory traffic between the
hub and a node is isolated from other nodes on the network,
other nodes may see a performance drop if traffic to another
node occupies a significant portion of the central node's
processing capability or throughput.
Furthermore, wiring up the system can be very complex.

3.6.4 Mesh networking

Mesh networking is a way to route data, voice, and instructions
between nodes. It allows for continuous connections and reconfiguration
around broken or blocked paths by "hopping" from node to node until the
destination is reached. A mesh network whose nodes are all connected to
each other is a fully connected network. Mesh networks differ from other
networks in that the component parts can all connect to each other via
multiple hops, and they generally are not mobile. Mesh networks can be
seen as one type of ad hoc network. Mobile ad hoc networks (MANETs) and
mesh networks are therefore closely related, but MANETs also have to deal
with the problems introduced by the mobility of the nodes.

Mesh networks are self-healing: the network can still operate even
when a node breaks down or a connection goes bad. As a result, this type
of network is very reliable. The concept is applicable to wireless
networks, wired networks, and software interaction; wireless mesh
networks in particular can self-form and self-heal.
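As a rough illustration of self-healing, the sketch below (a hypothetical four-node mesh; node names are invented) finds a route by hopping node to node, and finds an alternative route when a node fails.

```python
# Hypothetical sketch of mesh self-healing: frames hop node to node along any
# available path, so when a node fails, a breadth-first search over the
# remaining links still finds a route if one exists.

from collections import deque

def find_route(links, src, dst, failed=frozenset()):
    """Breadth-first search for a path from src to dst, skipping failed nodes."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in visited and nxt not in failed:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

# A small mesh: A-B-D and A-C-D are alternative paths to the same destination.
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_route(links, "A", "D"))                # ['A', 'B', 'D']
print(find_route(links, "A", "D", failed={"B"}))  # reroutes: ['A', 'C', 'D']
```

When node B fails, traffic simply hops through C instead; the network "heals" with no central coordination.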

Wireless mesh networks are the most prominent application of mesh
architectures. Wireless mesh was originally developed for military
applications but has undergone significant evolution in the past decade. As
the cost of radios plummeted, single-radio products evolved to support more
radios per mesh node, with the additional radios providing specific functions
such as client access, backhaul service, or scanning radios for high-speed
handover in mobility applications. The mesh node design also became more
modular: one box could support multiple radio cards, each operating at a
different frequency.

Examples

In early 2007, the US-based firm Meraki launched a mini wireless
mesh router. This is an example of a wireless mesh network (with a
claimed speed of up to 50 megabits per second). The 802.11 radio
within the Meraki Mini has been optimized for long-distance
communication, providing coverage over 250 meters. This is an
example of a single-radio mesh network being used within a
community, as opposed to multi-radio long-range mesh networks such as
Belair, Strix Systems, or MeshDynamics, which provide multifunctional
infrastructure.

The Naval Postgraduate School, Monterey CA, demonstrated a
wireless mesh network for border security. In a pilot system, aerial
cameras kept aloft by balloons relayed real time high resolution video
to ground personnel via a mesh network.
An MIT Media Lab project developed the XO-1 laptop, or
"OLPC", which is intended for underprivileged schools in developing
nations and uses mesh networking (based on the IEEE 802.11s
standard) to create a robust and inexpensive infrastructure. The
instantaneous connections made by the laptops are claimed by the
project to reduce the need for an external infrastructure such as the
internet to reach all areas, because a connected node could share the
connection with nodes nearby. A similar concept has also been
implemented by Greenpacket with its application called SONbuddy.
In Cambridge, UK, on 3 June 2006, mesh networking was used
at the "Strawberry Fair" to run mobile live television, radio and
internet services to an estimated 80,000 people.
The Champaign-Urbana Community Wireless Network (CUWiN)
project is developing mesh networking software based on open source
implementations of the Hazy-Sighted Link State Routing Protocol and
Expected Transmission Count metric.
SMesh is an 802.11 multi-hop wireless mesh network developed by
the Distributed System and Networks Lab at Johns Hopkins
University. A fast handoff scheme allows mobile clients to roam in
the network without interruption in connectivity, a feature suitable for
real-time applications, such as VoIP.

Many mesh networks operate across multiple radio bands. For
example, Firetide and Wave Relay mesh networks have the option to
communicate node to node on 5.2 GHz or 5.8 GHz, but communicate
node to client on 2.4 GHz (802.11). This is accomplished using SDR
(software-defined radio).
The SolarMESH project examined the potential of powering 802.11-
based mesh networks using solar power and rechargeable batteries.
Legacy 802.11 access points were found to be inadequate due to the
requirement that they be continuously powered. The IEEE 802.11s
standardization efforts are considering power-save options, but solar-
powered applications might involve single-radio nodes where relay-
link power saving will be inapplicable.

3.7 Switching
Two different switching techniques are used inside the telephone
system. These techniques are as follows:
3.7.1 Circuit Switching
3.7.2 Packet Switching

3.7.1 Circuit Switching

The telephone network is a circuit-switched network (although there
is also a packet-switched network used for signaling data). In a circuit-
switched network, a dedicated circuit must first be connected. Once the
circuit has been "nailed up", transmission can begin. When the transmission
is complete, the circuit is released for the next transmission.

Let us look at a simple telephone call. When we remove the receiver
from the telephone and dial a telephone number, the telephone company
searches its database to determine which circuit should be used to deliver the
call. If it is a long-distance call, the switch knows it must connect
to another telephone company office, where a switch called a tandem is
located. The tandem switch will then use a circuit that connects it to another
office, the toll office switch. This process continues until there are circuits
connected from the originator to the destination. These circuits cannot be
used for any other telephone call; they are dedicated to this one call until the
call is complete. Once the call is complete, the circuits can be released
and used for another call.

Circuit switching is not an efficient method for routing any kind of
data, whether it is digital voice or user data. The circuit is wasted much of
the time, because no transmission uses the bandwidth of the circuit 100
percent of the time. Any time there are idle periods on the circuit, the circuit
is being wasted. It would be much more efficient to have a transmission
facility capable of transmitting many different "conversations" over the same
circuit at the same time. This was achieved (in part) through multiplexing. A
circuit can be divided into channels, with each channel used for a
transmission. Digital telephone circuits are multiplexed and are capable of
transmitting several different conversations at the same time on the same
circuit.

There is one catch: each channel then becomes dedicated to the
conversation until the caller disconnects. Only then can the channel be
released for another transmission. So in the case where there are 24 channels
(the common denominator in today's digital facilities), there can be 24
different conversations going on at once over the same facility. This is better
than wasting the circuit on one transmission, but it could still be better.
Imagine having no channels: transmissions are sent over the same circuit as
needed, and there is no limit to the number of conversations that can be sent
over the same facility at the same time.
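The 24-channel arrangement described above can be sketched as follows (hypothetical Python; the class and call names are invented for illustration). A channel stays dedicated until the caller disconnects, so the 25th simultaneous call is blocked:

```python
# Hypothetical sketch contrasting the sharing model described above:
# a 24-channel TDM facility dedicates one channel per call for its whole
# duration, so a 25th simultaneous call cannot get through.

class TDMFacility:
    """A circuit-switched facility with a fixed number of channels."""
    def __init__(self, channels=24):
        self.channels = channels
        self.active = {}             # call_id -> channel number

    def connect(self, call_id):
        if len(self.active) >= self.channels:
            return None              # all channels busy: the call is blocked
        free = min(set(range(self.channels)) - set(self.active.values()))
        self.active[call_id] = free
        return free

    def disconnect(self, call_id):
        self.active.pop(call_id, None)   # a channel is freed only on hang-up

facility = TDMFacility(channels=24)
for call in range(24):
    assert facility.connect(f"call-{call}") is not None
print(facility.connect("call-24"))   # None: the 25th simultaneous call is blocked
facility.disconnect("call-0")
print(facility.connect("call-24"))   # 0: a freed channel can be reused
```

Packet switching, described next, removes the fixed-channel limit by sharing the whole circuit statistically.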

3.7.2 Packet switching

Packet switching is a network communications method that splits
data traffic (digital representations of text, sound, or video) into chunks,
called packets, which are then routed over a shared network. To accomplish
this, the original message/data is segmented into several smaller packets.
Each packet is then labeled with its destination or connection ID. In each
network node, packets are queued or buffered, resulting in variable delay
and throughput depending on the traffic load in the network. This contrasts
with the other principal paradigm, circuit switching, which sets up a specific
circuit with a limited number of constant-bit-rate, constant-delay
connections between nodes for exclusive use during the communication
session.

Packet mode, or packet-oriented, communication may be utilized
with or without a packet switch; in the latter case, it runs directly between
two hosts. Examples are point-to-point data links, digital video and audio
broadcasting, or a shared physical medium such as a bus network, ring
network, or hub network.

Packet mode communication is a statistical multiplexing technique,
also known as a dynamic bandwidth allocation method, in which a physical
communication channel is effectively divided into an arbitrary number of
logical variable-bit-rate channels or data streams. Each logical stream
consists of a sequence of packets, which normally are forwarded by a
network node asynchronously in a first-come, first-served fashion.
Alternatively, the packets may be forwarded according to some scheduling
discipline for fair queuing or for differentiated and/or guaranteed quality of
service. In the case of a shared physical medium, the packets may be
delivered according to some packet-mode multiple access scheme.

In packet switching there are no dedicated circuits. Each circuit in a
packet-switching network carries many different transmissions at the same
time. The only rule is that every data unit sent through a packet-switching
network must carry enough information in its header for the nodes in the
network to determine how to route it. This adds overhead to the data unit,
but the trade-off is well worth it.

One of the important advantages of packet switching is the ability to
route data units over any route, rather than a fixed route. For example, if I
have a lot of data to send, the data will have to be divided into many
different data units. These data units do not have to follow the same route
through a packet-switching network.

The trick is being able to place the data units in the proper order when
they are received. If data units are routed over different paths, it is highly
likely that the first data unit may be received after subsequent data units,
which means the order of transmission is now mixed up. The protocols used
in packet-switching networks have the ability to reassemble the data units
into their proper order.
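A minimal sketch of this segmentation and reassembly (hypothetical Python; the message and packet size are invented for illustration):

```python
# Hypothetical sketch of segmentation and reassembly: the sender splits a
# message into sequence-numbered packets; the receiver restores the proper
# order even when the packets arrive over different routes.

def segment(message, size):
    """Split a message into (sequence_number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message from packets received in any order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = segment("HELLO WORLD", 4)
# Simulate different routes delivering the packets out of order.
arrived = [packets[2], packets[0], packets[1]]
print(reassemble(arrived))   # HELLO WORLD
```

The sequence number in each packet header is what lets the receiving protocol put the data units back into transmission order.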

Packet switching is favored over circuit switching for many reasons.
It is more reliable than circuit switching because, if a particular
circuit in the network fails, the routers in the network simply route
data units over different circuits, taking a different route altogether. In a
circuit-switched network, this is not possible. If a circuit fails in the middle
of a transmission, the entire connection must be released and a new one
established, which means the conversation must start over again (think of
being disconnected from a telephone call; the whole process of connecting
must be repeated).

Packet switching is not new. The industry recognized the need for a
more efficient way of transmitting data over long-haul networks and
deployed the first X.25 networks in the 1970s. These packet-switching
networks were used by many corporations for years, and many still use them
today. Many corporations are now looking toward the Internet, and a
packet-switching network using TCP/IP, as their WAN solution.

In packet switching, the packets come in several forms. These forms are as
follows:
a) Data Packets: Data packets contain message segments as well as
sequence and routing information.
b) Control Packets: These are the brief messages (transmission requests
and acceptances, acknowledgements of data packet receipts) that
keep traffic flowing smoothly. Control packets initiate and
maintain communication.

3.7.3 History of packet switching

The concept of packet switching was first explored by Paul Baran in
the early 1960s, and then independently a few years later by Donald Davies
(Abbate, 2000). Leonard Kleinrock conducted early research in queueing
theory, which would be important in packet switching, and published a book
in the related field of digital message switching (without the packets) in
1961; he also later played a leading role in building and managing the
world's first packet-switched network, the ARPANET.

Baran developed the concept of packet switching during his research
at the RAND Corporation for the US Air Force into survivable
communications networks. The work was first presented to the Air Force in
the summer of 1961 as briefing B-265, then published as RAND Paper
P-2626 in 1962, and then included and expanded somewhat within a series
of eleven papers titled On Distributed Communications in 1964. Baran's
P-2626 paper described a general architecture for a large-scale, distributed,
survivable communications network. The paper focuses on three key ideas:
first, the use of a decentralized network with multiple paths between any two
points; second, dividing complete user messages into what he called
message blocks (later called packets); and third, delivery of these messages
by store-and-forward switching.

Baran's study made its way to Robert Taylor and J.C.R. Licklider at
the Information Processing Technology Office, both wide-area network
evangelists, and it helped influence Lawrence Roberts to adopt the
technology when Taylor put him in charge of development of the
ARPANET. Baran's packet switching work was similar to the research

performed independently by Donald Davies at the National Physical
Laboratory, UK. In 1965, Davies developed the concept of packet-switched
networks and proposed development of a UK wide network. He gave a talk
on the proposal in 1966, after which a person from the Ministry of Defense
told him about Baran's work. Davies met Lawrence Roberts at the 1967
ACM Symposium on Operating System Principles, bringing the two groups
together. Interestingly, Davies had chosen some of the same parameters for
his original network design as Baran, such as a packet size of 1024 bits.
Roberts and the ARPANET team took the name "packet switching" itself
from Davies's work.

3.7.4 Connectionless and connection oriented packet switching

The service actually provided to the user by networks using packet
switching nodes can either be connectionless (based on datagram
messages) or virtual circuit switching (also known as connection oriented).
Connectionless protocols include Ethernet, IP, and UDP; connection-
oriented packet-switching protocols include X.25, Frame Relay,
Asynchronous Transfer Mode (ATM), Multiprotocol Label Switching
(MPLS), and TCP.

In connection-oriented networks, each packet is labeled with a
connection ID rather than an address. Address information is transferred
to each node only during a connection set-up phase, when an entry is
added to the switching table in each network node.

In connectionless networks, each packet is labeled with a destination
address, and may also be labeled with the sequence number of the packet.
This precludes the need for a dedicated path to help the packet find its way
to its destination. Each packet is dispatched independently and may travel
via a different route. At the destination, the original message/data is
reassembled in the correct order, based on the packet sequence numbers.
Thus a virtual connection, also known as a virtual circuit or byte stream, is
provided to the end user by a transport layer protocol, although the
intermediate network nodes only provide a connectionless network layer
service.
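The connection-oriented model described above can be sketched as follows (hypothetical Python; node names and the connection ID are invented). The set-up phase installs an entry in each node's switching table, after which data packets carry only the short connection ID rather than a full address:

```python
# Hypothetical sketch of connection-oriented forwarding: a set-up phase
# installs a (connection ID -> next hop) entry in each node's switching
# table; data packets then carry only the connection ID label.

switching_tables = {}   # node -> {connection_id: next_hop}

def set_up(connection_id, path):
    """Install the connection along a path of nodes (address info used once)."""
    for node, next_hop in zip(path, path[1:]):
        switching_tables.setdefault(node, {})[connection_id] = next_hop

def forward(node, connection_id):
    """Look up the next hop for a labeled packet at a node."""
    return switching_tables[node][connection_id]

set_up(connection_id=7, path=["A", "S1", "S2", "B"])
hop = "A"
while hop != "B":          # a data packet simply follows the installed entries
    hop = forward(hop, 7)
print(hop)   # B
```

In a connectionless network there would be no set-up phase: every packet would carry the full destination address and each node would make an independent routing decision.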


3.7.5 Packet switching in networks

Packet switching is used to optimize the use of the channel capacity
available in digital telecommunication networks such as computer networks,
to minimize transmission latency (the time it takes for data to pass
across the network), and to increase the robustness of communication. The
best-known uses of packet switching are the Internet and local area networks.
The Internet uses the Internet protocol suite over a variety of data link layer
protocols; Ethernet and Frame Relay, for example, are very common. Newer
mobile phone technologies (e.g., GPRS, i-mode) also use packet switching.

X.25 is a notable use of packet switching in that, despite being based
on packet switching methods, it provided virtual circuits to the user. These
virtual circuits carry variable-length packets. In 1978, X.25 was used to
provide the first international and commercial packet-switching network, the
International Packet Switched Service (IPSS). Asynchronous Transfer Mode
(ATM) is also a virtual circuit technology, which uses fixed-length cell relay
and connection-oriented packet switching.

Datagram packet switching is also called connectionless networking
because no connections are established. Technologies such as Multiprotocol
Label Switching (MPLS) and the Resource Reservation Protocol (RSVP)
create virtual circuits on top of datagram networks. Virtual circuits are
especially useful in building robust failover mechanisms and allocating
bandwidth for delay-sensitive applications. MPLS and its predecessors, as
well as ATM, have been called "fast packet" technologies. MPLS, indeed,
has been called "ATM without cells". Modern routers, however, do not
require these technologies to be able to forward variable-length packets at
multigigabit speeds across the network.

Chapter Three
Data Networks
End Chapter Quizzes

1. In circuit switching
A Data is stored at intermediate nodes
B Transmission path is set before data transfer
C Dedicated communication link is not required
D None of the above

2. When the time to establish link is large and the size of data is
small, the preferred mode of data transfer is
A Circuit switching
B Packet switching
C Time division multiplexing
D None of the above

3. Which of the following is associated with network download?


A node
B Star topology
C Both
D None of the above

4. Network topology consisting of nodes attached in a ring without a
host computer is known as
A Star

B Ring
C Bus
D None

5. Which of the following topologies is not a broadcast type?


A Star
B Ring
C Bus
D None

6. Which of the following topologies is highly reliable?


A Star
B Ring
C Fully connected mesh
D None of the above

7. Typical data transfer rates in a LAN are of the order of


A Bits per second
B Mega bits per second
C Kilo bits per second
D None of the above

8. Ethernet uses
A Bus
B Mesh
C Ring
D None of the above

9. Wide area networks always require
A High band width communication links
B High speed processors
C Same type of processors
D None of the above

10. An intranet is easy to use because it


A Uses the same software as the world wide web
B Is just another name for a LAN
C Uses the internet
D All of the above

CHAPTER FOUR

INTERNET & INTERNET PROTOCOLS

4.1 Introduction

Visualization of the various routes through a portion of the Internet.

The Internet is a global system of interconnected computer networks
that interchange data by packet switching using the standardized Internet
Protocol Suite (TCP/IP). It is a "network of networks" that consists of
millions of private and public, academic, business, and government
networks of local to global scope that are linked by copper wires, fiber-optic
cables, wireless connections, and other technologies.

The Internet carries various information resources and services, such
as electronic mail, online chat, file transfer and file sharing, online gaming,
and the inter-linked hypertext documents and other resources of the World
Wide Web (WWW).

4.2 Terminology

The terms Internet and World Wide Web are often used in everyday
speech without much distinction. However, the Internet and the World Wide
Web are not one and the same. The Internet is a global data communications
system: a hardware and software infrastructure that provides
connectivity between computers. In contrast, the Web is one of the services
communicated via the Internet; it is a collection of interconnected
documents and other resources, linked by hyperlinks and URLs. The term
internet is written both with and without a capital letter, and is used both
with and without the definite article.

4.3 History of Internet

The USSR's launch of Sputnik spurred the United States to create the
Advanced Research Projects Agency, known as ARPA, in February 1958 to
regain a technological lead. ARPA created the Information Processing
Technology Office (IPTO) to further the research of the Semi Automatic
Ground Environment (SAGE) program, which had networked country-wide
radar systems together for the first time. J. C. R. Licklider was selected to
head the IPTO; he saw networking as a potential unifying human revolution.

Licklider moved from the Psycho-Acoustic Laboratory at Harvard
University to MIT in 1950, after becoming interested in information
technology. At MIT, he served on a committee that established Lincoln
Laboratory and worked on the SAGE project. In 1957 he became a Vice
President at BBN, where he bought the first production PDP-1 computer
and conducted the first public demonstration of time-sharing. At the IPTO,
Licklider got Lawrence Roberts to start a project to make a network, and
Roberts based the technology on the work of Paul Baran, who had written an
exhaustive study for the U.S. Air Force that recommended packet switching
(as opposed to circuit switching) to make a network highly robust and
survivable. After much work, the first two nodes of what would become the
ARPANET were interconnected between UCLA and SRI (later SRI
International) in Menlo Park, California, on October 29, 1969. The
ARPANET was one of the "eve" networks of today's Internet.

Following on from the demonstration that packet switching worked on
the ARPANET, the British Post Office, Telenet, DATAPAC and
TRANSPAC collaborated to create the first international packet-switched
network service. In the UK, this was referred to as the International Packet
Switched Service (IPSS), launched in 1978. The collection of X.25-based
networks grew from Europe and the US to cover Canada, Hong Kong and
Australia by 1981. The X.25 packet switching standard was developed in the
CCITT (now called ITU-T) around 1976.

X.25 was independent of the TCP/IP protocols that arose from the
experimental work of DARPA on the ARPANET, Packet Radio Net and
Packet Satellite Net during the same time period. Vinton Cerf and Robert
Kahn developed the first description of the TCP protocols during 1973 and

published a paper on the subject in May 1974. Use of the term "Internet" to
describe a single global TCP/IP network originated in December 1974 with
the publication of RFC 675, the first full specification of TCP that was
written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford
University. During the next nine years, work proceeded to refine the
protocols and to implement them on a wide range of operating systems.

The first TCP/IP-based wide-area network was operational by January
1, 1983, when all hosts on the ARPANET were switched over from the older
NCP protocols. In 1985, the United States' National Science Foundation
(NSF) commissioned the construction of the NSFNET, a university 56
kilobit/second network backbone using computers called "fuzzballs" by their
inventor, David L. Mills. The following year, NSF sponsored the conversion
to a higher-speed 1.5 megabit/second network. A key decision to use the
DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of
the Supercomputer program at NSF.

The opening of the network to commercial interests began in 1988.
The US Federal Networking Council approved the interconnection of the
NSFNET to the commercial MCI Mail system in that year, and the link was
made in the summer of 1989. Other commercial e-mail services
were soon connected, including OnTyme, Telemail and Compuserve. In that
same year, three commercial Internet service providers (ISPs) were created:
UUNET, PSINet and CERFNET. Important separate networks that offered
gateways into, then later merged with, the Internet include Usenet and
BITNET. Various other commercial and educational networks, such as
Telenet, Tymnet, Compuserve and JANET were interconnected with the
growing Internet. Telenet (later called Sprintnet) was a large privately

funded national computer network with free dial-up access in cities
throughout the U.S. that had been in operation since the 1970s. This network
was eventually interconnected with the others in the 1980s as the TCP/IP
protocol became increasingly popular. The ability of TCP/IP to work over
virtually any pre-existing communication networks allowed for a great ease
of growth, although the rapid growth of the Internet was due primarily to the
availability of commercial routers from companies such as Cisco Systems,
Proteon and Juniper, the availability of commercial Ethernet equipment for
local-area networking, and the widespread implementation of TCP/IP on the
UNIX operating system.

4.4 Growth of internet

Although the basic applications and guidelines that make the Internet
possible had existed for almost two decades, the network did not gain a
public face until the 1990s. On 6 August 1991, CERN, a pan-European
organisation for particle research, publicized the new World Wide Web
project. The Web had been invented by English scientist Tim Berners-Lee in
1989. An early popular web browser was ViolaWWW, patterned after
HyperCard and built using the X Window System. It was eventually
overtaken in popularity by the Mosaic web browser. In 1993, the National
Center for Supercomputing Applications at the University of Illinois
released version 1.0 of Mosaic, and by late 1994 there was growing public
interest in the previously academic, technical Internet. By 1996 usage of the
word Internet had become commonplace, and consequently, so had its use as
a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully
accommodated the majority of previously existing public computer networks
(although some networks, such as FidoNet, have remained separate). During
the 1990s, it was estimated that the Internet grew by 100% per year, with a
brief period of explosive growth in 1996 and 1997. This growth is often
attributed to the lack of central administration, which allows organic growth
of the network, as well as the non-proprietary open nature of the Internet
protocols, which encourages vendor interoperability and prevents any one
company from exerting too much control over the network. comScore, a
global Internet information provider, reported that the number of unique
users reached the 1 billion mark in December 2008. Based on the research
figures, Asia Pacific countries had the most internet users accounting for
41 percent, followed by Europe (28 percent), North America (18 percent),
Latin America (7 percent) and last was Middle East & Africa (together
5 percent). China topped the country-wise list with 17.8 percent of the global
internet audience. Of the various groups of websites, Google websites
topped the list, visited by 778 million unique visitors. They were followed
by Microsoft, Yahoo and AOL websites, along with Wikipedia and its sister
project sites.

However, the actual number of internet users might have reached the
1 billion mark earlier, as the comScore study did not include users aged
below 15. Also, internet access from public computers such as Internet
cafes, or via mobile phones or other personal gadgets, was not included:
the study was based on users accessing the internet from home or work
computers.

4.5 Today's Internet

The My Opera Community server rack. From the top, user file storage
(content of files.myopera.com), "bigma" (the master MySQL database
server), and two IBM blade centers containing multi-purpose machines
(Apache front ends, Apache back ends, slave MySQL database servers, load
balancers, file servers, cache servers and sync masters).

Aside from the complex physical connections that make up its
infrastructure, the Internet is facilitated by bi- or multi-lateral commercial
contracts (e.g., peering agreements) and by technical specifications or
protocols that describe how to exchange data over the network. Indeed, the
Internet is defined by its interconnections and routing policies. As of June
30, 2008, 1.463 billion people used the Internet according to Internet World
Stats.

4.6 Internet structure

There have been many analyses of the Internet and its structure. For
example, it has been determined that the Internet IP routing structure and
hypertext links of the World Wide Web are examples of scale-free networks.

Similar to the way the commercial Internet providers connect via
Internet exchange points, research networks tend to interconnect into large
subnetworks such as the following:

GEANT
GLORIAD
The Internet2 Network (formerly known as the Abilene Network)
JANET (the UK's national research and education network)

These in turn are built around relatively smaller networks. See also the
list of academic computer network organizations. In computer network
diagrams, the Internet is often represented by a cloud symbol, into and out of
which network communications can pass.

ICANN

ICANN headquarters in Marina Del Rey, California, United States.

The Internet Corporation for Assigned Names and Numbers (ICANN)
is the authority that coordinates the assignment of unique identifiers on the
Internet, including domain names, Internet Protocol (IP) addresses, and
protocol port and parameter numbers. A globally unified namespace (i.e., a
system of names in which there is at most one holder for each possible
name) is essential for the Internet to function. ICANN is headquartered in
Marina del Rey, California, but is overseen by an international board of
directors drawn from across the Internet technical, business, academic, and
non-commercial communities. The US government continues to have the
primary role in approving changes to the root zone file that lies at the heart
of the domain name system. Because the Internet is a distributed network
comprising many voluntarily interconnected networks, the Internet has no
governing body. ICANN's role in coordinating the assignment of unique
identifiers distinguishes it as perhaps the only central coordinating body on
the global Internet, but the scope of its authority extends only to the
Internet's systems of domain names, IP addresses, protocol ports and
parameter numbers. On November 16, 2005, the World Summit on the
Information Society, held in Tunis, established the Internet Governance
Forum (IGF) to discuss Internet-related issues.
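
The practical effect of this coordinated namespace is that any host can turn a globally unique domain name into IP addresses via the DNS. A minimal sketch using Python's standard library (the hostname is illustrative; any registered name works):

```python
import socket

def resolve(hostname):
    """Ask the local resolver (and ultimately the DNS) which IP
    addresses a name currently maps to."""
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr begins with the IP address string.
    return sorted({info[4][0] for info in infos})

# "localhost" is resolved locally without consulting the global DNS;
# a name like "www.example.com" would go through the resolver chain.
addresses = resolve("localhost")
```

Because the namespace is globally unified, the same name yields the same registration everywhere, which is exactly the property ICANN's coordination exists to guarantee.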

4.7 Language used on internet

The prevalent language for communication on the Internet is English.


This may be a result of the Internet's origins, as well as English's role as a
lingua franca. It may also be related to the poor capability of early
computers, largely originating in the United States, to handle characters
other than those in the English variant of the Latin alphabet.

After English (29% of Web visitors) the most requested languages on
the World Wide Web are Chinese (19%), Spanish (9%), Japanese (6%),
French (5%) and German (4%). By region, 40% of the world's Internet users
are based in Asia, 26% in Europe, 17% in North America, 10% in Latin
America and the Caribbean, 4% in Africa, 3% in the Middle East and 1% in
Australia.

The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for
development and communication in most widely used languages. However,
some glitches such as mojibake (incorrect display of foreign language
characters, also known as kryakozyabry) still remain.
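
Mojibake is simply a mismatch between the encoding used to write bytes and the codec used to read them back, as this small sketch shows:

```python
# Text in a non-Latin script, encoded correctly as UTF-8 bytes...
text = "привет"          # "hello" in Russian
raw = text.encode("utf-8")

# ...but decoded with the wrong codec (Latin-1 here) yields mojibake:
# each 2-byte UTF-8 character becomes two unrelated Latin characters.
garbled = raw.decode("latin-1")

# Decoding with the codec that was actually used recovers the text:
restored = raw.decode("utf-8")
```

Unicode solves the problem only when both ends agree on the encoding; when they do not, the "kryakozyabry" above is what the reader sees.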

Internet and the workplace

The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections
and Web applications.

The Internet viewed on mobile devices

The Internet can now be accessed virtually anywhere by numerous means. Mobile phones, datacards, handheld game consoles and cellular
routers allow users to connect to the Internet from anywhere there is a
cellular network supporting that device's technology.

Within the limitations imposed by the small screen and other limited
facilities of such a pocket-sized device, all the services of the Internet,
including email and web browsing, may be available in this way. Service
providers may restrict the range of these services, and charges for data access
may be significant, compared to home usage.

4.8 Uses of the Internet

The following are the different uses of the Internet:

4.8.1 Why do people use the Internet? People use the Internet for the
following reasons:

To find general information about a subject

The Web is like a huge encyclopedia of information - in some ways it's even better. The volume of information you'll find on the Web is
amazing. For every topic that you've ever wondered about, there's
bound to be someone who's written a Web page about it. The Web
offers many different perspectives on a single topic.
In fact you can even find online encyclopedias. Many of these are
now offering a subscription service which lets you search through the
complete text of the encyclopedia. There are also many free
encyclopedias that may give you a cut-down version of what you
would find in a complete encyclopedia.

To access information not easily available elsewhere

One of the great things about the Web is that it puts information into
your hands that you might otherwise have to pay for or find out by
less convenient means.

To correspond with faraway friends

Email offers a cheap and easy alternative to traditional methods of correspondence. It's faster and easier than writing snail mail and cheaper
than using the telephone. Of course, there are disadvantages too. It's not as
personal as a handwritten letter - and not as reliable either. If you spell the
name of the street wrong in a conventional address, it's not too difficult for
the post office to work out what you mean. However if you spell anything
wrong in an email address, your mail won't be delivered (you might get it
sent back to you or you might never realise).

To meet people

The Web is generally a very friendly place. People love getting email
from strangers, and friendships are quick to form from casual
correspondence. The "impersonal" aspect of email tends to encourage people
to reveal surprisingly personal things about themselves. When you know you
will never have to meet someone face-to-face, you may find it easier to tell
them your darkest secrets. Cyber-friendships have often developed into real
life ones too. Many people have even found love on the Net, and have gone
on to marry their cyber-partner.

To discuss their interests with like minded people

Did you think you were alone in your obsession with a singer, TV
programme, author or hobby? Chances are there's an Internet group for
people like you, discussing every little detail of your obsession right now.

To have fun

There's no doubt that the Internet is a fun place to be. There's plenty to
keep you occupied on a rainy day.

To learn

Online distance education courses can give you an opportunity to gain a qualification over the Internet.

To read the news

Many news organisations publish constantly updated news, weather and sports reports on their websites, often free of charge.

To find software

The Internet contains a wealth of useful downloadable shareware.
Some pieces of shareware are limited versions of the full piece of
software, others are time-limited trials (you should pay once the time limit
is up). Other shareware is free for educational institutes, or for non-
commercial purposes.

To buy things

The security of on-line shopping is still questionable, but as long as you are dealing with a reputable company or Web site the risks are minimal.

4.8.2 Why do people put things on the Web?

To advertise a product

Most company Web sites start up as a big advertisement for their
products and services. It may be hard to see why anyone would willingly
visit a 10-page ad - but these advertisements are very useful to anyone
genuinely interested in finding out about their products. Companies may also
give away some information for free as an incentive for people to visit their
pages.

To sell a product

Internet shopping (e-commerce) is still in its infancy - it takes a very good marketing strategy to actually make money out of selling items
over the Web, but that doesn't stop lots of people from trying.

To make money

A popular way to make money out of the Web is from advertising revenue. Popular sites have banners at the top of the page enticing
people to click them and be taken to the advertiser's Web site. These
banners are generally animated and very appealing, with mysterious
messages to make users wonder where they will be taken. For each
person that clicks the ad, the host site gets commission. Making
money this way is only successful if the site gets lots of visitors
(thousands a day); so the sites must be very useful and offer
something of real value to their visitors.

To share their knowledge with the world

Many individuals write Web pages to share information about their interests or hobbies. They don't expect to make any money out of it -
they just feel that the Web has given them so much information that
the least they can do is put something into it that may be useful for
others. Other rewards come from the prestige of having their site
recognised as something good and the contact inspired by their pages
with others sharing the same interest.

4.8.3 E-mail

Electronic mail, often abbreviated as e-mail, email, or eMail, is any method of creating, transmitting, or storing primarily text-based human
communications with digital communications systems. Historically, a
variety of electronic mail system designs evolved that were often
incompatible or not interoperable. With the proliferation of the Internet since
the early 1980s, however, the standardization efforts of Internet architects
succeeded in promulgating a single standard based on the Simple Mail
Transfer Protocol (SMTP), first published as Internet Standard 10 (RFC 821)
in 1982.

Modern e-mail systems are based on a store-and-forward model in
which e-mail server systems accept, forward, or store messages
on behalf of users, who only connect to the e-mail infrastructure with their
personal computer or other network-enabled device for the duration of
message transmission or retrieval to or from their designated server. Rarely
is e-mail transmitted directly from one user's device to another's.

While, originally, e-mail consisted only of text messages composed in the ASCII character set, virtually any media format can be sent today,
including attachments of audio and video clips.
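
The move from ASCII-only bodies to arbitrary media is handled by MIME, which SMTP carries unchanged. A hedged sketch using Python's standard email library (the addresses and attachment bytes below are placeholders, and actually relaying the message would require an SMTP server, e.g. via smtplib):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"     # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Store-and-forward demo"
msg.set_content("A plain-text body, as in the original ASCII-only e-mail.")

# Adding a binary attachment turns the message into a multipart
# MIME document, which is how non-text media ride over SMTP.
msg.add_attachment(b"\x89PNG\r\n\x1a\n", maintype="image",
                   subtype="png", filename="chart.png")

wire_form = msg.as_bytes()   # the bytes a server would store and forward
```

Under the store-and-forward model, `wire_form` is what each relay accepts, queues and passes on until the recipient's server finally holds it for retrieval.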

The concept of sending electronic text messages between parties in a
way analogous to mailing letters or memos predates the creation of the
Internet. Even today it can be important to distinguish between Internet and
internal e-mail systems. Internet e-mail may travel and be stored
unencrypted on many other networks and machines out of both the sender's
and the recipient's control. During this time it is quite possible for the
content to be read and even tampered with by third parties, if anyone
considers it important enough. Purely internal or intranet mail systems,
where the information never leaves the corporate or organization's network,
are much more secure, although in any organization there will be IT and
other personnel whose job may involve monitoring, and occasionally
accessing, the e-mail of other employees not addressed to them. Today you
can send pictures and attach files on e-mail. Most e-mail servers today also
feature the ability to send e-mail to multiple e-mail addresses.

E-mail, or electronic mail, is a means of transmitting messages
electronically. E-mail allows people to keep in touch in an easy and fast
way because of the invention of the Internet. It is a quick way to send a
message, a document or even a file to someone. Not only can an email be
sent to one person, but it can also be sent to as many people as needed,
and a received email can be forwarded to anyone the recipient chooses.
There are many different websites, such as Google, Yahoo and MSN, on
which to create email accounts, and an account on any of these allows one
to communicate with users of any of the others. These accounts are free,
and anyone is allowed to create one. E-mail allows information to be easily
sent and replied to between sender and recipient, although, as noted above,
mail that crosses the public Internet should not be assumed to be private.

4.8.4 The World Wide Web

WWW's historic logo designed by Robert Cailliau

The World Wide Web (commonly abbreviated as "the Web") is a system of interlinked hypertext documents accessed via the Internet. With a
Web browser, one can view Web pages that may contain text, images,
videos, and other multimedia and navigate between them using hyperlinks.
Using concepts from earlier hypertext systems, the World Wide Web was
begun in 1989 by the British scientist Tim Berners-Lee and Robert Cailliau, a
Belgian computer scientist, both working at the European Organization for
Nuclear Research (CERN) in Geneva, Switzerland. In 1990, they proposed
building a "web of nodes" storing "hypertext pages" viewed by "browsers"
on a network, and released that web to the public in 1991. Connected by the existing
Internet, other websites were created, around the world, adding international
standards for domain names & the HTML language. Since then, Berners-Lee
has played an active role in guiding the development of Web standards (such
as the markup languages in which Web pages are composed), and in recent
years has advocated his vision of a Semantic Web. Cailliau went on early
retirement in January 2005 and left CERN in January 2007.

The World Wide Web enabled the spread of information over the
Internet through an easy-to-use and flexible format. It thus played an
important role in popularising use of the Internet, to the extent that the
World Wide Web has become a synonym for Internet, with the two being
conflated in popular use.

Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks.

Many people use the terms Internet and World
Wide Web (or just the Web) interchangeably, but, as discussed above, the
two terms are not synonymous. The World Wide Web is a huge set of
interlinked documents, images and other resources, linked by hyperlinks and
URLs. These hyperlinks and URLs allow the web servers and other
machines that store originals, and cached copies of, these resources to
deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP
is only one of the communication protocols used on the Internet. Web
services also use HTTP to allow software systems to communicate in order
to share and exchange business logic and data.
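
An HTTP exchange is just a request line plus headers sent to a server, answered by a status line, headers and a body. The sketch below runs entirely on the local machine (no external connectivity assumed) using Python's standard library: a tiny server answers a GET request the way any web server would answer a browser.

```python
import http.client
import http.server
import threading

class Hello(http.server.BaseHTTPRequestHandler):
    """Answer every GET with a minimal HTML page."""
    def do_GET(self):
        body = b"<html><body>Hello, Web</body></html>"
        self.send_response(200)                       # status line: 200 OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):                     # silence request logging
        pass

# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: the same GET request a browser would issue.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
resp = conn.getresponse()
page = resp.read()
server.shutdown()
```

The response status and body are exactly what a browser parses and renders; hyperlinks in the body simply name further URLs to fetch the same way.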

Software products that can access the resources of the Web are
correctly termed user agents. In normal use, web browsers, such as Internet
Explorer, Firefox and Apple Safari, access web pages and allow users to
navigate from one to another via hyperlinks. Web documents may contain
almost any combination of computer data including graphics, sounds, text,
video, multimedia and interactive content including games, office
applications and scientific demonstrations. Through keyword-driven Internet
research using search engines like Yahoo! and Google, millions of people
worldwide have easy, instant access to a vast and diverse amount of online
information. Compared to encyclopedias and traditional libraries, the World
Wide Web has enabled a sudden and extreme decentralization of
information and data.

Using the Web, it is also easier than ever before for individuals and
organisations to publish ideas and information to an extremely large
audience. Anyone can find ways to publish a web page, a blog or build a
website for very little initial cost. Publishing and maintaining large,
professional websites full of attractive, diverse and up-to-date information is
still a difficult and expensive proposition, however. Many individuals and
some companies and groups use "web logs" or blogs, which are largely used
as easily updatable online diaries. Some commercial organisations
encourage staff to fill them with advice on their areas of specialization in the
hope that visitors will be impressed by the expert knowledge and free
information, and be attracted to the corporation as a result. One example of
this practice is Microsoft, whose product developers publish their personal
blogs in order to pique the public's interest in their work. Collections of
personal web pages published by large service providers remain popular, and
have become increasingly sophisticated. Whereas operations such as
Angelfire and GeoCities have existed since the early days of the Web, newer
offerings from, for example, Facebook and MySpace currently have large
followings. These operations often brand themselves as social network
services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.
In the early days, web pages were usually created as sets of complete and
isolated HTML text files stored on a web server. More recently, websites are
more often created using content management or wiki software with,
initially, very little content. Contributors to these systems, who may be paid
staff, members of a club or other organisation or members of the public, fill
underlying databases with content using editing pages designed for that
purpose, while casual visitors view and read this content in its final HTML
form. There may or may not be editorial, approval and security systems built
into the process of taking newly entered content and making it available to
the target visitors.

4.8.5 Remote access

The Internet allows computer users to connect to other computers and information stores easily, wherever they may be across the world. They may
do this with or without the use of security, authentication and encryption
technologies, depending on the requirements. This is encouraging new ways
of working from home, collaboration and information sharing in many
industries. An accountant sitting at home can audit the books of a company
based in another country, on a server situated in a third country that is
remotely maintained by IT specialists in a fourth. These accounts could have
been created by home-working bookkeepers, in other remote locations,
based on information e-mailed to them from offices all over the world. Some
of these things were possible before the widespread use of the Internet, but
the cost of private leased lines would have made many of them infeasible in
practice.

An office worker away from his desk, perhaps on the other side of the
world on a business trip or a holiday, can open a remote desktop session into
his normal office PC using a secure Virtual Private Network (VPN)
connection via the Internet. This gives the worker complete access to all of
his or her normal files and data, including e-mail and other applications,
while away from the office. This concept is also referred to by some network
security people as the Virtual Private Nightmare, because it extends the
secure perimeter of a corporate network into its employees' homes.

4.8.6 Collaboration

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a
group cheaply communicate and share ideas, but the wide reach of the
Internet allows such groups to easily form in the first place. An example of
this is the free software movement, which has produced Linux, Mozilla
Firefox, OpenOffice.org etc.

Internet "chat", whether in the form of IRC chat rooms or channels, or
via instant messaging systems, allow colleagues to stay in touch in a very
convenient way when working at their computers during the day. Messages
can be exchanged even more quickly and conveniently than via e-mail.
Extensions to these systems may allow files to be exchanged, "whiteboard"
drawings to be shared or voice and video contact between team members.
Version control systems allow collaborating teams to work on shared sets of
documents without either accidentally overwriting each other's work or
having members wait until they get "sent" documents to be able to make
their contributions.

Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas
including scientific research, software development, conference planning,
political activism and creative writing.

4.8.7 File sharing

A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy
download by others. It can be put into a "shared location" or onto a file
server for instant use by colleagues. The load of bulk downloads to many
users can be eased by the use of "mirror" servers or peer-to-peer networks.
In any of these cases, access to the file may be controlled by user
authentication, the transit of the file over the Internet may be obscured by
encryption, and money may change hands for access to the file. The price
can be paid by the remote charging of funds from, for example, a credit card
whose details are also passed—hopefully fully encrypted—across the
Internet. The origin and authenticity of the file received may be checked by
digital signatures or by MD5 or other message digests.
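
A message digest lets the receiver check that a file survived transit unmodified: the sender publishes the digest, the receiver recomputes it over the bytes actually received and compares. A minimal sketch using SHA-256 (MD5 works the same way, though it is no longer considered collision-resistant):

```python
import hashlib

def digest(data: bytes, algo: str = "sha256") -> str:
    """Hex digest of a file's bytes; comparing it with a published
    value detects corruption or tampering in transit."""
    h = hashlib.new(algo)
    h.update(data)
    return h.hexdigest()

received = b"contents of a downloaded file"
# The same bytes always give the same digest...
assert digest(received) == digest(received)
# ...while any change, however small, gives a completely different one:
assert digest(received) != digest(received + b"!")
```

A digest alone proves only integrity; proving who produced the file additionally requires a digital signature over that digest.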

These simple features of the Internet, on a worldwide basis, are
changing the production, sale, and distribution of anything that can be
reduced to a computer file for transmission. This includes all manner of print
publications, software products, news, music, film, video, photography,
graphics and the other arts. This in turn has caused seismic shifts in each of
the existing industries that previously controlled the production and
distribution of these products.

4.8.8 Streaming media

Many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They
may also allow time-shift viewing or listening such as Preview, Classic
Clips and Listen Again features. These providers have been joined by a
range of pure Internet "broadcasters" who never had on-air licenses. This
means that an Internet-connected device, such as a computer or something
more specific, can be used to access on-line media in much the same way as
was previously possible only with a television or radio receiver. The range
of material is much wider, from pornography to highly specialized, technical
webcasts. Podcasting is a variation on this theme, where—usually audio—
material is downloaded and played back on a computer or shifted to a
portable media player to be listened to on the move. These techniques using
simple equipment allow anybody, with little censorship or licensing control,
to broadcast audio-visual material on a worldwide basis.

Webcams can be seen as an even lower-budget extension of this
phenomenon. While some webcams can give full-frame-rate video, the
picture is usually either small or updates slowly. Internet users can watch
animals around an African waterhole, ships in the Panama Canal, traffic at a
local roundabout or monitor their own premises, live and in real time. Video
chat rooms and video conferencing are also popular with many uses being
found for personal webcams, with and without two-way sound. YouTube
was founded on 15 February 2005 and is now the leading website for free
streaming video with a vast number of users. It uses a flash-based web
player to stream and show the video files. Users are able to watch videos
without signing up; however, if they do sign up, they are able to upload an
unlimited amount of videos and build their own personal profile. YouTube
claims that its users watch hundreds of millions, and upload hundreds of
thousands, of videos daily.

4.8.9 Internet Telephony (VoIP)

VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s
with walkie-talkie-like voice applications for personal computers. In recent
years many VoIP systems have become as easy to use and as convenient as a
normal telephone. The benefit is that, as the Internet carries the voice traffic,
VoIP can be free or cost much less than a traditional telephone call,
especially over long distances and especially for those with always-on
Internet connections such as cable or ADSL.

VoIP is maturing into a competitive alternative to traditional
telephone service. Interoperability between different providers has improved
and the ability to call or receive a call from a traditional telephone is
available. Simple, inexpensive VoIP network adapters are available that
eliminate the need for a personal computer. Voice quality can still vary from
call to call but is often equal to and can even exceed that of traditional calls.
Remaining problems for VoIP include emergency telephone number dialling
and reliability. Currently, a few VoIP providers provide an emergency
service, but it is not universally available. Traditional phones are line-
powered and operate during a power failure; VoIP does not do so without a
backup power source for the phone equipment and the Internet access
devices.

VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for
gaming include Ventrilo and Teamspeak, and others. PlayStation 3 and
Xbox 360 also offer VoIP chat features.

4.8.10 Political organization and censorship

In democratic societies, the Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the
United States became famous for its ability to generate donations via the
Internet. Many political groups use the Internet to achieve a whole new
method of organizing, in order to carry out Internet activism.

Some governments, such as those of Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, restrict what people in their
countries can access on the Internet, especially political and religious
content. This is accomplished through software that filters domains and
content so that they may not be easily accessed or obtained without elaborate
circumvention. In Norway, Denmark, Finland and Sweden, major Internet
service providers have voluntarily (possibly to avoid such an arrangement
being turned into law) agreed to restrict access to sites listed by police.
While this list of forbidden URLs is only supposed to contain addresses of
known child pornography sites, the content of the list is secret.

Many countries, including the United States, have enacted laws making the possession or distribution of certain material, such as child
pornography, illegal, but do not use filtering software. There are many free
and commercially available software programs with which a user can choose
to block offensive websites on individual computers or networks, such as to
limit a child's access to pornography or violence. See Content-control
software.

4.8.11 Leisure activities

The Internet has been a major source of leisure since before the World
Wide Web, with entertaining social experiments such as MUDs and MOOs
being conducted on university servers, and humor-related Usenet groups
receiving much of the main traffic. Today, many Internet forums have
sections devoted to games and funny videos; short cartoons in the form of
Flash movies are also popular. Over 6 million people use blogs or message
boards as a means of communication and for the sharing of ideas.

The pornography and gambling industries have both taken full
advantage of the World Wide Web, and often provide a significant source of
advertising revenue for other websites. Although many governments have
attempted to put restrictions on both industries' use of the Internet, this has
generally failed to stop their widespread popularity. One main area of leisure
on the Internet is multiplayer gaming. This form of leisure creates
communities, bringing people of all ages and origins to enjoy the fast-paced
world of multiplayer games. These range from MMORPG to first-person
shooters, from role-playing games to online gambling. This has
revolutionized the way many people interact and spend their free time on the
Internet. While online gaming has been around since the 1970s, modern
modes of online gaming began with services such as GameSpy and MPlayer,
to which players of games would typically subscribe. Non-subscribers were
limited to certain types of gameplay or certain games.

Many use the Internet to access and download music, movies and
other works for their enjoyment and relaxation. As discussed above, there
are paid and unpaid sources for all of these, using centralized servers and
distributed peer-to-peer technologies. Some of these sources take more care
over the original artists' rights and over copyright laws than others. Many
use the World Wide Web to access news, weather and sports reports, to plan
and book holidays and to find out more about their random ideas and casual
interests.

People use chat, messaging and e-mail to make and stay in touch with
friends worldwide, sometimes in the same way as some previously had pen
pals. Social networking websites like MySpace, Facebook and many others
like them also put and keep people in contact for their enjoyment. The
Internet has seen a growing number of Web desktops, where users can
access their files, folders, and settings via the Internet. Cyberslacking has
become a serious drain on corporate resources; the average UK employee
spends 57 minutes a day surfing the Web at work, according to a study by
Peninsula Business Services.

4.8.12 Marketing

The Internet has also become a large market for companies; some of
the biggest companies today have grown by taking advantage of the efficient
nature of low-cost advertising and commerce through the Internet, also
known as e-commerce. It is the fastest way to spread information to a vast
number of people simultaneously. The Internet has also subsequently
revolutionized shopping—for example; a person can order a CD online and
receive it in the mail within a couple of days, or download it directly in some
cases. The Internet has also greatly facilitated personalized marketing which
allows a company to market a product to a specific person or a specific
group of people more so than any other advertising medium.

Examples of personalized marketing include online communities such as MySpace, Friendster, Orkut, Facebook and others which thousands of
Internet users join to advertise themselves and make friends online. Many of
these users are young teens and adolescents ranging from 13 to 25 years old.
In turn, when they advertise themselves they advertise interests and hobbies,
which online marketing companies can use as information as to what those
users will purchase online, and advertise their own companies' products to
those users.

4.9 Internet access

Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber optic or copper wires), Wi-Fi, satellite and 3G
technology cell phones. Public places to use the Internet include libraries
and Internet cafes, where computers with Internet connections are available.
There are also Internet access points in many public places such as airport
halls and coffee shops, in some cases just for brief use while standing.
Various terms are used, such as "public Internet kiosk", "public access
terminal", and "Web payphone". Many hotels now also have public
terminals, though these are usually fee-based. These terminals are widely
accessed for various usage like ticket booking, bank deposit, online payment
etc. Wi-Fi provides wireless access to computer networks, and therefore can
do so to the Internet itself. Hotspots providing such access include Wi-Fi
cafes, where would-be users need to bring their own wireless-enabled
devices such as a laptop or PDA. These services may be free to all, free to
customers only, or fee-based. A hotspot need not be limited to a confined
location. A whole campus or park, or even an entire city can be enabled.
Grassroots efforts have led to wireless community networks. Commercial
Wi-Fi services covering large city areas are in place in London, Vienna,
Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet
can then be accessed from such places as a park bench.

Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services
over cellular phone networks, and fixed wireless services. High-end mobile
phones such as smartphones generally come with Internet access through the

phone network. Web browsers such as Opera are available on these
advanced handsets, which can also run a wide variety of other Internet
software. More mobile phones have Internet access than PCs, though this is
not as widely used. An Internet access provider and protocol matrix
differentiates the methods used to get online.

4.10 Social impact of internet

Chris Young was voted into the 2007 Major League Baseball All-Star Game on the Internet via the All-Star Final Vote.

The Internet has made possible entirely new forms of social interaction, activities and organizing, thanks to its basic features such as
widespread usability and access. Social networking websites such as
Facebook and MySpace have created a new form of socialization and
interaction. Users of these sites are able to add a wide variety of items to
their personal pages, to indicate common interests, and to connect with

others. It is also possible to find a large circle of existing acquaintances,
especially if a site allows users to utilize their real names, and to allow
communication among large existing groups of people.

Sites like meetup.com exist to allow wider announcement of groups which may exist mainly for face-to-face meetings, but which may have a
variety of minor interactions over their group's site at meetup.org, or other
similar sites.

4.11 Complex architecture

Many computer scientists see the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is
extremely heterogeneous. (For instance, data transfer rates and physical
characteristics of connections vary widely.) The Internet exhibits "emergent
phenomena" that depend on its large-scale organization. For example, data
transfer rates exhibit temporal self-similarity. Further adding to the
complexity of the Internet is the ability of more than one computer to use the
Internet through only one node, thus creating the possibility for a very deep
and hierarchical sub-network that can theoretically be extended infinitely
(disregarding the programmatic limitations of the IPv4 protocol). However,
since principles of this architecture date back to the 1960s, it might not be a
solution best suited to modern needs, and thus the possibility of developing
alternative structures is currently being looked into.

According to a June 2007 article in Discover magazine, the combined weight of all the electrons moved within the Internet in a day is 0.2 millionths of an ounce. Others have estimated this at nearer 2 ounces (50
grams).

4.12 Internet Protocol

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various
aspects of the architecture. While the hardware can often be used to support
other software systems, it is the design and the rigorous standardization
process of the software architecture that characterizes the Internet.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).
The IETF conducts standard-setting work groups, open to any individual,
about the various aspects of Internet architecture. Resulting discussions and
final standards are published in Requests for Comments (RFCs), freely
available on the IETF web site.

The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These
standards describe a system known as the Internet Protocol Suite. This is a
model architecture that divides methods into a layered system of protocols
(RFC 1122, RFC 1123). The layers correspond to the environment or scope
in which their services operate. At the top is the space (Application Layer)
of the software application, e.g., a web browser application, and just below it
is the Transport Layer which connects applications on different hosts via the
network (e.g., client-server model). The underlying network consists of two
layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and thus is the layer that
establishes internetworking and the Internet, and lastly, at the bottom, is a
software layer that provides connectivity between hosts on the same local
link (therefore called Link Layer), e.g., a local area network (LAN) or a dial-
up connection. This model is also known as the TCP/IP model of
networking. While other models have been developed, such as the Open
Systems Interconnection (OSI) model, they are not compatible in the details
of description, nor implementation.
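The division of labor among these layers can be sketched with a short, illustrative Python program (not from the original text): the application supplies the bytes, while TCP transport, IP internetworking and link-level delivery are all handled by the operating system behind the socket API. The echo server and the use of the loopback address here are assumptions made purely for the demonstration.

```python
import socket
import threading

def echo_server(server_sock):
    # Transport Layer (TCP) hands the server our application bytes intact.
    conn, _ = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(data.upper())   # reply with the same application data
    conn.close()

def demo():
    # AF_INET selects the IPv4 Internet Layer; SOCK_STREAM selects TCP.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen(1)
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    client.sendall(b"hello, internet")   # Application Layer data
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

print(demo())   # b'HELLO, INTERNET'
```

Note that the program never touches IP or link-layer headers directly; everything below the Transport Layer is the kernel's responsibility, which is exactly the abstraction the layered model describes.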

The most prominent component of the Internet model is the Internet Protocol (IP) which provides addressing systems for computers on the
Internet and facilitates the internetworking of networks. IP Version 4 (IPv4)
is the initial version used on the first generation of today's Internet and is
still in dominant use. It was designed to address up to ~4.3 billion (2^32)
Internet hosts. However, the explosive growth of the Internet has led to IPv4
address exhaustion. A new protocol version, IPv6, was developed which
provides vastly larger addressing capabilities and more efficient routing of
data traffic. IPv6 is currently in commercial deployment phase around the
world. IPv6 is not interoperable with IPv4. It essentially establishes a
"parallel" version of the Internet not accessible with IPv4 software. This
means software upgrades are necessary for every networking device that
needs to communicate on the IPv6 Internet. Most modern computer
operating systems are already converted to operate with both versions of the
Internet Protocol. Network infrastructures, however, are still lagging in this
development.
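The difference in address space between the two versions can be checked with a short Python sketch using the standard-library `ipaddress` module (the specific example addresses below are documentation-range values chosen for illustration):

```python
import ipaddress

# IPv4 uses 32-bit addresses (~4.3 billion hosts); IPv6 uses 128-bit ones.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)   # 4294967296

# The two address families are distinct, matching the "parallel Internet"
# point above: each address string parses as exactly one version.
print(ipaddress.ip_address("192.0.2.1").version)     # 4
print(ipaddress.ip_address("2001:db8::1").version)   # 6

# A /24 IPv4 network holds only 256 addresses, which makes the
# exhaustion pressure on the 32-bit space easy to see.
net = ipaddress.ip_network("192.0.2.0/24")
print(net.num_addresses)   # 256
```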

The Internet Protocol (IP) is a protocol used for communicating data across a packet-switched internetwork using the Internet Protocol Suite, also
referred to as TCP/IP. IP is the primary protocol in the Internet Layer of the
Internet Protocol Suite and has the task of delivering distinguished protocol
datagrams (packets) from the source host to the destination host solely based
on their addresses. For this purpose the Internet Protocol defines addressing
methods and structures for datagram encapsulation. The first major version
of addressing structure, now referred to as Internet Protocol Version 4 (IPv4)
is still the dominant protocol of the Internet, although the successor, Internet
Protocol Version 6 (IPv6) is being actively deployed worldwide.
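The addressing and encapsulation structures mentioned above are concrete byte layouts. As an illustration, the 20-byte IPv4 header defined in RFC 791 carries the source and destination addresses at fixed offsets (bytes 12-15 and 16-19). The sketch below hand-builds a sample header (the field values are invented examples, not captured traffic) and reads the addresses back out:

```python
import struct

# Build a minimal 20-byte IPv4 header (RFC 791 field order).
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,            # version 4, header length 5 words (20 bytes)
    0,                       # type of service
    40,                      # total length: 20-byte header + 20-byte payload
    0, 0,                    # identification, flags/fragment offset
    64,                      # time to live
    6,                       # protocol number 6 = TCP
    0,                       # checksum (left as 0 in this sketch)
    bytes([192, 0, 2, 1]),   # source address 192.0.2.1
    bytes([192, 0, 2, 2]),   # destination address 192.0.2.2
)

# Routers deliver the datagram "solely based on the addresses" found here.
version = header[0] >> 4
src = ".".join(str(b) for b in header[12:16])
dst = ".".join(str(b) for b in header[16:20])
print(version, src, dst)   # 4 192.0.2.1 192.0.2.2
```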

4.12.1 Internet Protocol Suite

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the Internet and other similar
networks. It is named from two of the most important protocols in it: the
Transmission Control Protocol (TCP) and the Internet Protocol (IP), which
were the first two networking protocols defined in this standard. Today's IP
networking represents a synthesis of several developments that began to
evolve in the 1960s and 1970s, namely the Internet and LANs (Local Area
Networks), which emerged in the mid- to late-1980s, together with the
advent of the World Wide Web in early 1990s.

The Internet Protocol Suite, like many protocol suites, may be viewed
as a set of layers. Each layer solves a set of problems involving the
transmission of data, and provides a well-defined service to the upper layer
protocols based on using services from some lower layers. Upper layers are
logically closer to the user and deal with more abstract data, relying on
lower layer protocols to translate data into forms that can eventually be
physically transmitted. The TCP/IP model consists of four layers (RFC 1122). From lowest to highest, these are the Link Layer, the Internet Layer,
the Transport Layer, and the Application Layer.

The Internet Protocol Suite

Application Layer: BGP · DHCP · DNS · FTP · GTP · HTTP · IMAP · IRC · NNTP · NTP · POP · RIP · RPC · RTP · RTSP · SDP · SIP · SMTP · SNMP · SOAP · SSH · STUN · Telnet · TLS/SSL · XMPP · (more)

Transport Layer: TCP · UDP · DCCP · SCTP · RSVP · ECN · (more)

Internet Layer: IP (IPv4, IPv6) · ICMP · ICMPv6 · IGMP · IPsec · (more)

Link Layer: ARP · RARP · NDP · OSPF · Tunnels (L2TP) · Media Access Control (Ethernet, DSL, ISDN, FDDI) · Device Drivers · (more)

4.12.2 History of internet protocol

The Internet Protocol Suite resulted from work done by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After
building the pioneering ARPANET in 1969, DARPA started work on a
number of other data transmission technologies. In 1972, Robert E. Kahn
was hired at the DARPA Information Processing Technology Office, where
he worked on both satellite packet networks and ground-based radio packet
networks, and recognized the value of being able to communicate across
them. In the spring of 1973, Vinton Cerf, the developer of the existing
ARPANET Network Control Program (NCP) protocol, joined Kahn to work
on open-architecture interconnection models with the goal of designing the
next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental
reformulation, where the differences between network protocols were hidden
by using a common internetwork protocol, and, instead of the network being
responsible for reliability, as in the ARPANET, the hosts became
responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of
the CYCLADES network, with important influences on this design. With the
role of the network reduced to the bare minimum, it became possible to join
almost any networks together, no matter what their characteristics were,
thereby solving Kahn's initial problem. One popular saying has it that
TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two
tin cans and a string." There is even an implementation designed to run using
homing pigeons, IP over Avian Carriers, documented in RFC 1149.

A computer called a router (a name changed from gateway to avoid
confusion with other types of gateways) is provided with an interface to each
network, and forwards packets back and forth between them. Requirements
for routers are defined in (Request for Comments 1812). The idea was
worked out in more detailed form by Cerf's networking research group at
Stanford in the 1973–74 period, resulting in the first TCP specification
(Request for Comments 675) (The early networking work at Xerox PARC,
which produced the PARC Universal Packet protocol suite, much of which
existed around the same period of time (i.e. contemporaneous), was also a
significant technical influence; people moved between the two). DARPA
then contracted with BBN Technologies, Stanford University, and the
University College London to develop operational versions of the protocol
on different hardware platforms. Four versions were developed: TCP v1,
TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then
stability with TCP/IP v4 — the standard protocol still in use on the Internet
today.

In 1975, a two-network TCP/IP communications test was performed


between Stanford and University College London (UCL). In November,
1977, a three-network TCP/IP test was conducted between sites in the US,
UK, and Norway. Several other TCP/IP prototypes were developed at
multiple research centers between 1978 and 1983. The migration of the
ARPANET to TCP/IP was officially completed on January 1, 1983 when the
new protocols were permanently activated. In March 1982, the US
Department of Defense declared TCP/IP as the standard for all military
computer networking. In 1985, the Internet Architecture Board held a three
day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing
commercial use. Kahn and Cerf were honored with the Presidential Medal of
Freedom on November 9, 2005 for their contribution to American culture.

4.12.3 Layers in the Internet Protocol Suite

The concept of layers

The TCP/IP suite uses encapsulation to provide abstraction of protocols and services. Such encapsulation usually is aligned with the
division of the protocol suite into layers of general functionality. In general,
an application (the highest level of the model) uses a set of protocols to send
its data down the layers, being further encapsulated at each level.

This may be illustrated by an example network scenario, in which two Internet host computers communicate across local network boundaries
constituted by their internetworking gateways (routers).

Encapsulation of application data descending through the protocol stack.

TCP/IP stack operating on two hosts connected via two routers and the corresponding layers used at each hop.
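This descent through the stack can be sketched in Python. The header formats below are deliberately simplified stand-ins (two 16-bit ports for the "transport" header, two 4-byte addresses for the "internet" header), not the real TCP/IP wire formats; the point is only that each layer prepends its own header to the data handed down from the layer above:

```python
import struct

def transport_encapsulate(app_data, src_port, dst_port):
    # Toy "TCP-like" header: just two 16-bit port numbers.
    header = struct.pack("!HH", src_port, dst_port)
    return header + app_data

def internet_encapsulate(segment, src_ip, dst_ip):
    # Toy "IP-like" header: just two 4-byte addresses.
    header = (bytes(int(o) for o in src_ip.split(".")) +
              bytes(int(o) for o in dst_ip.split(".")))
    return header + segment

app_data = b"GET / HTTP/1.1"                                       # Application Layer
segment = transport_encapsulate(app_data, 49152, 80)               # Transport Layer
packet = internet_encapsulate(segment, "192.0.2.1", "192.0.2.2")   # Internet Layer

# Each layer adds bytes in front; the original data rides inside untouched.
print(len(app_data), len(segment), len(packet))   # 14 18 26
```

On the receiving host the process runs in reverse: each layer strips its own header and passes the remaining payload upward.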

The functional groups of protocols and methods are the Application Layer,
the Transport Layer, the Internet Layer, and the Link Layer (RFC 1122). Note that this model was not intended to be a rigid reference model into which new protocols must fit in order to be accepted as a standard.

The following table provides some examples of the protocols grouped in their respective layers.

Application: DNS, TFTP, TLS/SSL, FTP, Gopher, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SMPP, SNMP, SSH, Telnet, Echo, RTP, PNRP, rlogin, ENRP. Routing protocols like BGP and RIP, which run over TCP/UDP, may also be considered part of the Internet Layer.

Transport: TCP, UDP, DCCP, SCTP, IL, RUDP, RSVP

Internet: IP (IPv4, IPv6), ICMP, IGMP, ICMPv6. OSPF for IPv4 was initially considered an IP-layer protocol since it runs per IP subnet, but has been placed in the Link Layer since RFC 2740.

Link: ARP, RARP, OSPF (IPv4/IPv6), IS-IS, NDP

Layer names and number of layers in the literature

The following table shows the layer names and the number of layers
in the TCP/IP model as presented in widespread university course textbooks
on computer networking used today.

Kurose, Forouzan (five layers): Application, Transport, Network, Data link, Physical

Comer, Kozierok (five layers): Application, Transport, Internet, Data link (Network interface), (Hardware)

Stallings (five layers): Application, Host-to-host or transport, Internet, Network access, Physical

Tanenbaum (four layers): Application, Transport, Internet, Host-to-network

RFC 1122 (four layers): Application, Transport, Internet, Link

Cisco Academy (four layers): Application, Transport, Internetwork, Network interface

These textbooks are secondary sources that may contravene the intent of
RFC 1122 and other IETF primary sources.

Different authors have interpreted the RFCs differently regarding whether the Link Layer (and the four-layer TCP/IP model) covers physical
layer issues or a "hardware layer" is assumed below the link layer. Some
authors have tried to use other names for the link layer, such as Network
interface layer, in an effort to avoid confusion with the Data link layer of the seven-layer OSI model. Others have attempted to map the Internet Protocol
model onto the seven-layer OSI Model. The mapping often results in a five-
layer TCP/IP model, wherein the Link Layer is split into a Data Link Layer
on top of a Physical Layer. Especially in literature with a bottom-up
approach to computer networking, where physical layer issues are
emphasized, an evolution towards a five-layer Internet model can be
observed, for pedagogical reasons.

The Internet Layer is usually directly mapped to the OSI's Network Layer. At the top of the hierarchy, the Transport Layer is always mapped directly into OSI Layer 4 of the same name. OSI's Application Layer,
Presentation Layer, and Session Layer are collapsed into TCP/IP's
Application Layer. As a result, these efforts result in either a four- or five-
layer scheme with a variety of layer names. This has caused considerable
confusion in the application of these models. Other authors dispense with
rigid pedagogy focusing instead on functionality and behavior.

The Internet protocol stack has never been altered by the Internet
Engineering Task Force (IETF) from the four layers defined in RFC 1122.
The IETF makes no effort to follow the seven-layer OSI model and does not
refer to it in standards-track protocol specifications and other architectural
documents. The IETF has repeatedly stated that Internet protocol and
architecture development is not intended to be OSI-compliant. RFC 3439,
addressing Internet architecture, contains a section entitled: "Layering
Considered Harmful".

4.12.4 Implementations

Today, most operating systems include and install a TCP/IP stack by default. For most users, there is no need to look for implementations. TCP/IP
is included in all commercial Unix systems, Mac OS X, and all free-software
Unix-like systems such as Linux distributions and BSD systems, as well as
all Microsoft Windows operating systems. Unique implementations include
Lightweight TCP/IP, an open source stack designed for embedded systems
and KA9Q NOS, a stack and associated protocols for amateur packet radio
systems and personal computers connected via serial lines.

Chapter Four
Internet & Internet Protocols
End Chapter Quizzes

1. Database management and electronic mail software would be found in which layer of the OSI model?
A The Application layer
B The Presentation layer
C The Data link layer
D The Network layer

2. The X.25 set of standards covers how many layers of the OSI
model?
A One
B Three
C Two
D Four

3. Which OSI model layer concerns itself with hardware specifications?
A Data link
B Physical
C Network
D Presentation

4. A protocol is really
A A set of demands
B A set of rules
C A translation book for diplomats
D A call with very high authorization

5. Which enables a computer to work with a printer?
A Drivers
B Packet processors
C HCL
D Protocols

6. Which protocol is a network layer protocol?
A IPX
B FTP
C Telnet
D None of the above

7. Which layer of the OSI model performs data compression?
A Network
B Physical
C Data link
D Presentation

8. Satellite communications use
A Medium frequency band
B Optical frequencies

C Microwaves
D None of the above

9. The basic (minimum) data rate in digital telephony is
A 4 kb/s
B 8 kb/s
C 64 kb/s
D None of the above

10. A PABX can be used for
A Voice connections only
B Data connections only
C Both a and b
D None of the above

CHAPTER FIVE

MULTIMEDIA

5.1 Introduction
Multimedia is simply multiple forms of media integrated together.
Media can be text, graphics, audio, animation, video, data, etc. An example
of multimedia is a web page on the topic of Mozart that has text regarding
the composer along with an audio file of some of his music and can even
include a video of his music being played in a hall. Besides multiple types of
media being integrated with one another, multimedia can also stand for
interactive types of media, such as video games, CD-ROMs that teach a
foreign language, or an information kiosk at a subway terminal. Other terms
that are sometimes used for multimedia include hypermedia and rich media.

The term Multimedia is said to date back to 1965 and was used to
describe a show by the Exploding Plastic Inevitable. The show included a
performance that integrated music, cinema, special lighting and human
performance. Today, the word multimedia is used quite frequently, from
DVDs to CD-ROMs to even a magazine that includes text and pictures.

So multimedia is a term that was coined by the advertising industry to mean buying ads on TV, radio, outdoor and print media. It was originally
picked up by the PC industry to mean a computer that could display text in
16 colors and had a sound card. The term was a joke when you compared the
PC to the Apple Macintosh which was truly a multimedia machine that
could show color movies with sound and lifelike still images.

When Windows reached about version 3, and Intel was making the 386, the SoundBlaster-equipped PC was beginning to approach the Mac in sound capabilities, but it still had a long way to go as far as video. The Pentium processor, VGA graphics and Windows 95 nearly closed the gap with the Mac, and today, with fast Pentiums, new high-definition monitors and blazing fast video cards, the PC has caught up with the Mac and
outperforms television. There are a number of terrific software packages that
allow you to create multimedia presentations on your computer. Perhaps the
best and most widely known is Microsoft's PowerPoint. With PowerPoint a
user can mix text with pictures, sound and movies to produce a multimedia
slideshow that's great for boardroom presentations or a computer kiosk but
difficult to distribute.

Eventually, in the not too distant future, the digital movie embedded in web pages will become the presentation delivery system of choice, relegating
PowerPoint to the dustbins of software. If you have ever browsed a DVD
movie disk on your computer you've seen that future.

The basic elements of multimedia on a computer are:

Text
Still images
Sound
Movies
Animations
Special Effects

Text, still images and the video portion of movies are functions of your
monitor, your video card and the software driver that tells Windows how
your video card works. Your monitor is essentially a grid of closely spaced
little luminous points called pixels which can be turned on and off like tiny
light bulbs. For the sake of simplicity we'll extend our above example to say
that the little bulbs can be lighted with a number of colors. Just how close
together those points of light are packed is a function of your monitor. The
number of colors that the luminescent points can display is a function of the
monitor in concert with the video card. (If you're wondering what a video
card is, follow the cable from your monitor to your computer.)
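The relationship between the pixel grid and the video card can be made concrete with a little arithmetic. The resolution and color depth below are illustrative examples, not figures from the text:

```python
# How much memory a video card needs to hold one frame for the monitor.
width, height = 1024, 768   # pixel grid on the monitor
bits_per_pixel = 24         # 8 bits each for red, green and blue

pixels = width * height
framebuffer_bytes = pixels * bits_per_pixel // 8

print(pixels)              # 786432 luminous points to drive
print(framebuffer_bytes)   # 2359296 bytes, about 2.25 MB per frame
```

Doubling the resolution or the color depth multiplies the memory and bandwidth the video card must supply, which is why early PCs struggled to match the Mac on video.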

5.2 History of multimedia

In 1965 the term Multi-media was used to describe the Exploding Plastic Inevitable, a performance that combined live rock music, cinema,
experimental lighting and performance art.

In the intervening forty years the word has taken on different meanings. In the late 1970s the term was used to describe presentations
consisting of multi-projector slide shows timed to an audio track. In the
1990s it took on its current meaning. In common usage the term multimedia
refers to an electronically delivered combination of media including video,
still images, audio and text, in such a way that it can be accessed interactively.
Much of the content on the web today falls within this definition as
understood by millions. Some computers which were marketed in the 1990s
were called "multimedia" computers because they incorporated a CD-ROM
drive, which allowed for the delivery of several hundred megabytes of video,
picture, and audio data.

5.3 Major characteristics of multimedia

Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast
may be a live or recorded multimedia presentation. Broadcasts and
recordings can be either analog or digital electronic media technology.
Digital online multimedia may be downloaded or streamed. Streaming
multimedia may be live or on-demand.

Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network,
or locally with an offline computer, game system, or simulator. The various
formats of technological or digital multimedia may be intended to enhance
the users' experience, for example to make it easier and faster to convey
information, or, in entertainment or art, to transcend everyday experience.

A laser show is a live multimedia performance.

Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is increasingly becoming object-
oriented and data-driven, enabling applications with collaborative end-user
innovation and personalization on multiple forms of content over time.
Examples of these range from multiple forms of content on Web sites like
photo galleries with both images (pictures) and title (text) user-updated, to

simulations whose co-efficients, events, illustrations, animations or videos
are modifiable, allowing the multimedia "experience" to be altered without
reprogramming. In addition to seeing and hearing, Haptic technology
enables virtual objects to be felt. Emerging technology involving illusions of
taste and smell may also enhance the multimedia experience.

5.4 Multimedia Applications

Multimedia has become a huge force in American culture, industry and education. Practically any type of information we receive can be
categorized as multimedia, from television, to magazines, to web pages, to
movies, multimedia is a tremendous force in both informing the American
public and entertaining us. Advertising is perhaps one of the biggest industries that use multimedia to send their message to the masses. Where a single type of media, say radio or text, can be a good way to promote an item, multimedia techniques can make the item being advertised significantly better received by the masses, and in many cases with greater results.

Multimedia in education has been extremely effective in teaching individuals a wide range of subjects. The human brain learns using many
senses such as sight and hearing. While a lecture can be extremely
informative, a lecture that integrates pictures or video images can help an
individual learn and retain information much more effectively. Using
interactive CD-ROMs can be extremely effective in teaching students a wide variety of disciplines, most notably foreign languages and music.

5.5 Multimedia and the Future

As technology progresses, so will multimedia. Today, there are plenty of new media technologies being used to create the complete multimedia
experience. For instance, virtual reality integrates the sense of touch with
video and audio media to immerse an individual into a virtual world. Other
media technologies being developed include the sense of smell that can be
transmitted via the Internet from one individual to another. Today's video
games include biofeedback. In this instance, a shock or vibration is given to
the game player when he or she crashes or gets killed in the game. In
addition as computers increase their power new ways of integrating media
will make the multimedia experience extremely intricate and exciting.

Multimedia is more than one concurrent presentation medium (for example, on CD-ROM or a Web site). Although still images are a different
medium than text, multimedia is typically used to mean the combination of
text, sound, and/or motion video. Some people might say that the addition of
animated images (for example, animated GIF on the Web) produces
multimedia, but it has typically meant one of the following:

Text and sound
Text, sound, and still or animated graphic images
Text, sound, and video images
Video and sound
Multiple display areas, images, or presentations presented
concurrently
In live situations, the use of a speaker or actors and "props"
together with sound, images, and motion video

Multimedia can arguably be distinguished from traditional motion
pictures or movies both by the scale of the production (multimedia is usually
smaller and less expensive) and by the possibility of audience interactivity or
involvement (in which case, it is usually called interactive multimedia).
Interactive elements can include: voice command, mouse manipulation, text
entry, touch screen, video capture of the user, or live participation (in live
presentations).

Multimedia tends to imply greater sophistication (and relatively more expense) in both production and presentation than simple text-and-images.
Multimedia presentations are possible in many contexts, including the Web,
CD-ROMs, and live theater. A rule-of-thumb for the minimum development
cost of a packaged multimedia production with video for commercial
presentation (as at trade shows) is: $1,000 a minute of presentation time.
Since any Web site can be viewed as a multimedia presentation, however,
any tool that helps develop a site in multimedia form can be classed as
multimedia software and the cost can be less than for standard video
productions. Multimedia is media and content that utilizes a combination of
different content forms. The term can be used as a noun (a medium with
multiple content forms) or as an adjective describing a medium as having
multiple content forms. The term is used in contrast to media which only
utilize traditional forms of printed or hand-produced material. Multimedia
includes a combination of text, audio, still images, animation, video, and
interactivity content forms.

Multimedia is usually recorded and played, displayed or accessed by information content processing devices, such as computerized and electronic
devices, but can also be part of a live performance. Multimedia (as an adjective) also describes electronic media devices used to store and
experience multimedia content. Multimedia is similar to traditional mixed
media in fine art, but with a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one
particular multimedia application.

5.6 Categorization of multimedia

Multimedia may be broadly divided into linear and non-linear categories. Linear active content progresses without any navigation control
for the viewer such as a cinema presentation. Non-linear content offers user
interactivity to control progress as used with a computer game or used in
self-paced computer based training. Hypermedia is an example of non-linear
content. Multimedia presentations can be live or recorded. A recorded
presentation may allow interactivity via a navigation system. A live
multimedia presentation may allow interactivity via an interaction with the
presenter or performer.

5.7 Uses of multimedia

Since media is the plural of medium, the term "multimedia" is a pleonasm if "multi" is used to describe multiple occurrences of only one
form of media such as a collection of audio CDs. This is why it's important
that the word "multimedia" is used exclusively to describe multiple forms
of media and content.

The term "multimedia" is also ambiguous. Static content (such as a paper book) may be considered multimedia if it contains both pictures and
text or may be considered interactive if the user interacts by turning pages at
will. Books may also be considered non-linear if the pages are accessed non-
sequentially. The term "video", if not used exclusively to describe motion
photography, is ambiguous in multimedia terminology. Video is often used
to describe the file format, delivery format, or presentation format instead of
"footage" which is used to distinguish motion photography from
"animation" of rendered motion imagery. Multiple forms of information
content are often not considered multimedia if they don't contain modern
forms of presentation such as audio or video. Likewise, single forms of
information content with single methods of information processing (e.g.
non-interactive audio) are often called multimedia, perhaps to distinguish
static media from active media. Performing arts may also be considered
multimedia considering that performers and props are multiple forms of both
content and media.

Ghost Recon Advanced Warfighter 2. Video games may include a combination of text, audio, still images, animation, video, and interactivity content forms.

A presentation using PowerPoint. Corporate presentations may combine all forms of media content.

Virtual reality uses multimedia content. Applications and delivery platforms of multimedia are virtually limitless.

VVO Multimedia-Terminal in Dresden WTC (Germany)

Multimedia finds its application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatio-temporal applications. Several examples are as follows:

5.7.1 Creative industries

Creative industries use multimedia for a variety of purposes ranging from fine arts, to entertainment, to commercial art, to journalism, to media and software services provided for any of the industries listed below. An individual multimedia designer may cover this whole spectrum throughout their career. Requests for their skills range from technical, to analytical, to creative.

5.7.2 Commercial

Much of the electronic old and new media utilized by commercial artists is multimedia. Exciting presentations are used to grab and keep attention in advertising. Business-to-business and interoffice communications are often developed by creative services firms as advanced multimedia presentations, going beyond simple slide shows, to sell ideas or liven up training. Commercial multimedia developers may be hired to design for governmental and nonprofit services applications as well.

5.7.3 Entertainment and fine arts

In addition, multimedia is heavily used in the entertainment industry, especially to develop special effects in movies and animations. Multimedia games are a popular pastime and are software programs available either on CD-ROM or online. Some video games also use multimedia features. Multimedia applications that allow users to actively participate, instead of just sitting by as passive recipients of information, are called interactive multimedia. In the arts there are multimedia artists, who blend techniques using different media that in some way incorporate interaction with the viewer. One of the most relevant is Peter Greenaway, who melds cinema with opera and all sorts of digital media. Another approach entails the creation of multimedia that can be displayed in a traditional fine arts arena, such as an art gallery. Although multimedia display material may be volatile, the survivability of the content is as strong as in any traditional medium. Digital recording material may be just as durable and is infinitely reproducible, with perfect copies every time.
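Many animation effects rest on interpolating between frames. As a minimal illustrative sketch (an assumption made here for brevity: each "frame" is reduced to a single grayscale value), such "tweening" between a start and an end image can be computed like this:

```python
def tween(start, end, steps):
    """Linearly interpolate `steps` intermediate values between a start
    and an end value (a single grayscale pixel stands in for a frame)."""
    return [start + (end - start) * i / (steps + 1)
            for i in range(1, steps + 1)]

# Four in-between frames from black (0) to mid-gray (100):
print(tween(0, 100, 4))  # → [20.0, 40.0, 60.0, 80.0]
```

In a real animation tool the same interpolation is applied per pixel (or per control point) across two full images.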

5.7.4 Education

In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference works such as encyclopedias and almanacs. A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. Edutainment is an informal term used to describe combining education with entertainment, especially multimedia entertainment. Learning theory has expanded dramatically in the past decade because of the introduction of multimedia, and several lines of research have evolved (e.g. cognitive load theory and multimedia learning). The possibilities for learning and instruction are nearly endless.

5.7.5 Engineering

Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often designed as a collaboration between creative professionals and software engineers.

5.7.6 Industry

In the industrial sector, multimedia is used as a way to help present information to shareholders, superiors and coworkers. Multimedia is also helpful for providing employee training, and for advertising and selling products all over the world via virtually unlimited web-based technologies.

5.7.7 Mathematical and scientific research

In mathematical and scientific research, multimedia is mainly used for modelling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia.

5.7.8 Medicine

In medicine, doctors can get trained by watching a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.

5.7.9 Miscellaneous

In Europe, the reference organization for the multimedia industry is the European Multimedia Associations Convention (EMMAC). An observatory for jobs in the multimedia industry provides surveys and analysis about multimedia and ICT jobs.

5.8 Structuring information in a multimedia form

Multimedia represents the convergence of text, pictures, video and sound into a single form. The power of multimedia and the Internet lies in the way in which information is linked. Multimedia and the Internet therefore require a completely new approach to writing: the style of writing appropriate for the 'on-line world' is highly optimized and designed to be quickly scanned by readers.

A good site must be made with a specific purpose in mind, and a site with good interactivity and new technology can also be useful for attracting visitors. The site must be attractive and innovative in its design, functional in terms of its purpose, easy to navigate, frequently updated and fast to download. Because users can view only one page at a time, they must build a 'mental model of the information structure' as they navigate.

Patrick Lynch, author of the Yale University Web Style Manual, states that users need predictability and structure, with clear functional and graphical continuity between the various components and subsections of the multimedia production. In this way, the home page of any multimedia production should always be a landmark, able to be accessed from anywhere within a multimedia piece.

Chapter Five
Multimedia
End Chapter Quizzes

1. Multimedia on LANs requires all of the following except
A Compression chips
B Windows
C CD-ROMs
D Very fast microprocessors

2. A ---- image is defined as a grid whose cells are filled with colours.
A Bitmap
B Vector
C Printed
D Interactive

3. Most ---- programs were limited to drawing simple, geometric outlines with simple colours.
A Graphics
B Images
C Animations
D None

4. The bitmap file format supports up to ---- bit depth colour.
A 23
B 24
C 25
D 26

5. Extension for sound files is
A .Wav
B .Wan
C .Mid
D .Bmp

6. Capacity of a CD-ROM is generally
A 400 MB
B 500 MB
C 650 MB
D 200 MB

7. If the program plays a sequence of sound, images and video, this is called
A Media
B Multimedia
C Interactive multimedia
D None of the above

8. Bitmap images are made up of
A Pixels
B Tiny dots
C Both a and b
D None

9. ---- is a technique in which you define the start image and the end
image
A Morphing
B Tweening
C Multimedia
D None of the above

10. The term multimedia was first introduced in the year
A 1960
B 1970
C 1965
D 1945

Answer key to End Chapter Quizzes

Chapter One
1) b 2) b 3) b 4) b 5) c 6) a 7) c 8) a 9) b 10) c

Chapter Two
1) a 2) b 3) c 4) b 5) c 6) b 7) c 8) b 9) a 10) a

Chapter Three
1) b 2) b 3) c 4) b 5) b 6) c 7) b 8) a 9) d 10) a

Chapter Four
1) a 2) b 3) b 4) b 5) a 6) a 7) d 8) c 9) c 10) c

Chapter Five
1) d 2) a 3) a 4) b 5) a 6) c 7) b 8) c 9) a 10) c

BIBLIOGRAPHY

(I) Books:

1. Verma, Anant : Working with Word
2. Sagman, Steve : Microsoft Office
3. Ghosh, L. K. : Introductory Multimedia
4. Jain, M. : Data Communication and Networking
5. Jain, V. K. : Computer Networks & Communication
6. Bajaj, Naveena : Computer Networks & Communication
7. Norton, Peter : Introduction to Computers
8. Gupta, S. P. : Information Technology
9. Leon, Alex : Fundamentals of Computers

(II) Journals, Periodicals, Newspapers and Other Useful Publications

1. I.T. Journal Weekly
2. Journal of Data Communication

(III) Reports and Other Materials

1. Journals in Information Technology and Computers
2. Various reports relating to the I.T. sector

Suggested Books
Data Communication and Computer Networking

1. Data Communication and Networking - M. Jain
2. Computer Networks & Communications - Bajaj & Naveena
3. Introduction to Computers - Robert D. Shepherd
4. Introduction to Computers - Peter Norton
