
UNIT-2

2.1 Network Applications and Protocols


 The language used by the members of a network is called a network communication
protocol.
 A protocol is a set of rules and procedures.
 A network protocol specifies the vocabulary and rules of data communication.
 Different applications on different networks may use the same or different application
protocols.
 Hosts communicate using these protocols, and differing protocols are
converted at the gateway of a network.

Application Layer:-
The application layer is present at the top of the OSI model. It is the layer through which
users interact. It provides services to the user.
Application Layer protocol:-
1. TELNET:
Telnet stands for the TELecommunication NETwork. It helps in terminal emulation: it allows a
Telnet client to access the resources of a Telnet server. It is used for managing files on
remote systems and for the initial set-up of devices like switches. The telnet command is a
command that uses the Telnet protocol to communicate with a remote device or system.
The port number of Telnet is 23.
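
For illustration, here is a minimal Python sketch (not a full Telnet client) that simply opens a TCP connection to port 23 and reads the banner a Telnet server sends; the address used is only a placeholder.

# Minimal sketch: connect to TCP port 23 and read the Telnet server's banner.
# "192.0.2.10" is a placeholder address used only for illustration.
import socket

TELNET_PORT = 23

def read_banner(host, timeout=5.0):
    with socket.create_connection((host, TELNET_PORT), timeout=timeout) as sock:
        return sock.recv(1024)      # first bytes the Telnet server sends

if __name__ == "__main__":
    print(read_banner("192.0.2.10"))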
2. FTP:
FTP stands for File Transfer Protocol. It is the protocol that actually lets us transfer files, and it
can facilitate this between any two machines using it. FTP is not just a protocol but also
a program. FTP promotes sharing of files via remote computers with reliable and
efficient data transfer. The port numbers for FTP are 20 for data and 21 for control.
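
As a rough illustration, the sketch below uses Python's standard ftplib module; the host, credentials and file name are placeholders, not values from this text.

# Minimal sketch of an FTP session with Python's standard ftplib.
# Host, credentials and file name are placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:                 # control connection (port 21)
    ftp.login("anonymous", "guest@example.com")
    ftp.retrlines("LIST")                           # directory listing over the data connection
    with open("readme.txt", "wb") as fh:
        ftp.retrbinary("RETR readme.txt", fh.write) # download a file in binary mode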

3. TFTP:
The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP, but
it is the protocol of choice if you know exactly what you want and where to find it. It is a
technology for transferring files between network devices and is a simplified version of
FTP.

4. NFS:
It stands for Network File System. It allows remote hosts to mount file systems over a
network and interact with those file systems as though they were mounted locally. This
enables system administrators to consolidate resources onto centralized servers on the
network.

5. SMTP:
It stands for Simple Mail Transfer Protocol. It is part of the TCP/IP protocol suite. Using a
process called "store and forward," SMTP moves your email on and across networks. It
works closely with something called the Mail Transfer Agent (MTA) to send your
communication to the right computer and email inbox. The port number for SMTP is 25.
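
A minimal sketch using Python's standard smtplib and email modules is shown below; the relay host and addresses are placeholders, and a real mail server will usually also require authentication or TLS.

# Minimal sketch of handing a message to an SMTP server (the MTA) on port 25.
# Relay host and addresses are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "SMTP demo"
msg.set_content("Hello over port 25!")

with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)        # store-and-forward hand-off to the mail transfer agent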

6. LPD:
It stands for Line Printer Daemon. It is designed for printer sharing. It is the part that
receives and processes the print request. A "daemon" is a server or agent.

7. SNMP:
It stands for Simple Network Management Protocol. It gathers data by polling the devices
on the network from a management station at fixed or random intervals, requiring
them to disclose certain information. It is a way that servers can share information about
their current state, and also a channel through which an administrator can modify pre-defined
values. The port numbers of SNMP are 161 (agent) and 162 (traps), typically over UDP.

8. DNS:
It stands for Domain Name System. Every time you use a domain name, a DNS
service must translate the name into the corresponding IP address. For example, the
domain name www.abc.com might translate to 198.105.232.4.
The port number for DNS is 53.
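
For example, the standard socket module in Python can be used to ask the resolver (and ultimately a DNS server on port 53) for this translation; the domain below is a placeholder.

# Minimal sketch of a forward DNS lookup (name -> IP address).
import socket

name = "www.example.com"            # placeholder domain
print(name, "->", socket.gethostbyname(name))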

9. DHCP:
It stands for Dynamic Host Configuration Protocol. It assigns IP addresses to
hosts. There is a lot of information a DHCP server can provide to a host when the host is
registering for an IP address with the DHCP server. The port numbers for DHCP are 67 and 68.

2.1.1. Computer communication systems

1. Centralized systems
2. Decentralized systems
3. Distributed systems

Centralized systems
Centralized systems are systems that use client/server architecture, where one or more
client nodes are directly connected to a central server. This is the most commonly used
type of system in many organisations, where a client sends a request to a company server and
receives the response.
Example –
Wikipedia. Consider a massive server to which we send our requests, and the server
responds with the article that we requested. Suppose we enter the search term 'junk food'
in the Wikipedia search bar. This search term is sent as a request to the Wikipedia servers
(mostly located in Virginia, U.S.A.), which then respond with the articles ranked by
relevance. In this situation, we are the client node and the Wikipedia servers are the
central server.
Characteristics of Centralized System –
 Presence of a global clock: As the entire system consists of a central
node (a server / a master) and many client nodes (a computer / a slave), all client nodes
sync up with the global clock (the clock of the central node).
 One single central unit: One single central unit
which serves/coordinates all the other nodes in the system.
 Dependent failure of components: Central node failure causes entire system to fail.
This makes sense because when the server is down, no other entity is there to
send/receive response/requests.

Architecture of Centralized System –

Client-Server architecture. The central node that serves the other nodes in the system is the
server node and all the other nodes are the client nodes.

Limitations of Centralized System –


 Can't scale up vertically after a certain limit – After a limit, even if you increase the
hardware and software capabilities of the server node, the performance will not
increase appreciably, leading to a cost/benefit ratio < 1.
 Bottlenecks can appear when the traffic spikes – the server has only a finite
number of open ports on which it can listen for connections from client nodes. So, when
high traffic occurs, such as during a shopping sale, the server can essentially suffer a Denial-of-
Service (DoS) or Distributed Denial-of-Service (DDoS) attack.

Advantages of Centralized System –


 Easy to physically secure – It is easy to secure and service the server and client nodes
by virtue of their location.
 Smooth and elegant personal experience – A client has a dedicated system which it
uses (for example, a personal computer), and the company has a similar system which
can be modified to suit custom needs.
 Dedicated resources (memory, CPU cores, etc.).
 More cost-efficient for small systems up to a certain limit – As centralized systems
take less money to set up, they have an edge when small systems have to be built.
 Quick updates are possible – Only one machine to update.
 Easy detachment of a node from the system – Just remove the connection of the client
node from the server and voila! Node detached.

Disadvantages of Centralized System –


 Highly dependent on network connectivity – The system can fail if the nodes lose
connectivity, as there is only one central node.
 No graceful degradation of the system – abrupt failure of the entire system.
 Less possibility of data backup – If the server node fails and there is no backup, you
lose the data straight away.
 Difficult server maintenance – There is only one server node, and due to availability
reasons it is inefficient and unprofessional to take the server down for maintenance.
So updates have to be done on the fly (hot updates), which is difficult and the system
could break.

Applications of Centralized System –


 Application development – Very easy to set up a central server and send client
requests. Modern frameworks come with default test servers which can be
launched with a couple of commands, for example the Express server or the Django
development server (a minimal sketch follows this list).
 Data analysis – Easy to do data analysis when all the data is in one place and
available for analysis.
 Personal computing
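
As an analogous sketch (using Python's standard library rather than Express or Django), a one-file central test server can look like this; port 8000 is an arbitrary choice.

# Minimal central test server: one process answers every client request.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("Central test server listening on port 8000 ...")
    server.serve_forever()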
Use Cases –
 Centralized databases – all the data in one server for use.
 Single-player games like Need For Speed or GTA Vice City – the entire game runs on one
system (commonly, a personal computer)
 Application development by deploying test servers leading to easy debugging, easy
deployment, easy simulation
 Personal Computers

Organisations Using –
National Informatics Center (India), IBM

2. DECENTRALIZED SYSTEMS:
In decentralized systems, every node makes its own decision. The final behavior of the
system is the aggregate of the decisions of the individual nodes. Note that there is no single
entity that receives and responds to the request.
Example –
Bitcoin. Let's take Bitcoin as an example because it is the most popular use case of
decentralized systems. No single entity/organisation owns the Bitcoin network. The
network is the sum of all the nodes that talk to each other to maintain the amount of
bitcoin every account holder has.
Characteristics of Decentralized System –
 Lack of a global clock: Every node is independent of the others
and hence has its own clock that it runs and follows.
 Multiple central units (computers/nodes/servers): More than
one central unit can listen for connections from other nodes.
 Dependent failure of components: one central node failure causes a part of the system
to fail, not the whole system.

Architecture of Decentralized System –


 peer-to-peer architecture – all nodes are peers of each other. No one node has
supremacy over other nodes
 master-slave architecture – One node can become a master by voting and help in
coordinating a part of the system, but this does not mean the node has supremacy
over the other nodes which it is coordinating.
Limitations of Decentralized System –
 May lead to problems of coordination at the enterprise level – When every node is
the owner of its own behavior, it is difficult to achieve collective tasks.
 Not suitable for small systems – Not beneficial to build and operate small
decentralized systems because of low cost/benefit ratio
 No way to regulate a node on the system – no superior node overseeing the
behavior of subordinate nodes

Advantages of Decentralized System –


 Minimal problem of performance bottlenecks occurring – The entire load gets
balanced on all the nodes; leading to minimal to no bottleneck situations
 High availability – Some nodes(computers, mobiles, servers) are always
available/online for work, leading to high availability
 More autonomy and control over resources – As each node controls its own
behavior, it has better autonomy leading to more control over resources

Disadvantages of Decentralized System –


 Difficult to achieve global big tasks – No chain of command to command others to
perform certain tasks
 No regulatory oversight
 Difficult to know which node failed – Each node must be pinged for availability
checking and partitioning of work has to be done to actually find out which node
failed by checking the expected output with what the node generated
 Difficult to know which node responded – When a request is served by a
decentralized system, the request is actually served by one of the nodes in the system,
but it is difficult to find out which node actually served the request.

Applications of Decentralized System –


 Private networks – peer nodes joined with each other to make a private network.
 Cryptocurrency – Nodes joined to become a part of a system in which digital
currency is exchanged without any trace and location of who sent what to whom.
However, in bitcoin we can see the public address and amount of bitcoin transferred,
but those public addresses are mutable and hence difficult to trace.

Use Cases –
 Blockchain
 Decentralized databases – Entire database split in parts and distributed to different
nodes for storage and use. For example, records with names starting from ‘A’ to ‘K’ in
one node, ‘L’ to ‘N’ in second node and ‘O’ to ‘Z’ in third node
 Cryptocurrency
Organisations Using – Bitcoin, Tor network

3. DISTRIBUTED SYSTEMS:
A distributed system allows resource sharing, including software by systems connected to
the network. Examples of distributed systems / applications of distributed computing :
Intranets, Internet, WWW, email.

Example –
The Google search system. Each request is worked upon by hundreds of computers which
crawl the web and return the relevant results. To the user, Google appears to be one
system, but it is actually multiple computers working together to accomplish one single
task (returning the results of the search query).
Characteristics of Distributed System –
 Concurrency of components: Nodes apply consensus protocols
to agree on the same values/transactions/commands/logs.
 Lack of a global clock: All nodes maintain their own clock.
 Independent failure of components: In a distributed system, nodes fail
independently without having a significant effect on the entire system. If one node
fails, the entire system sans the failed node continues to work.
Scaling –
Horizontal and vertical scaling is possible.
Components of Distributed System –

 Node (computer, mobile, etc.)
 Communication link (cables, Wi-Fi, etc.)

Architecture of Distributed System –


 peer-to-peer – all nodes are peer of each other and work towards a common goal
 client-server – some nodes become server nodes for the role of coordinator,
arbiter, etc.
 n-tier architecture – different parts of an application are distributed in different
nodes of the systems and these nodes work together to function as an application for
the user/client

Limitations of Distributed System –


 Difficult to design and debug algorithms for the system. These algorithms are
difficult because of the absence of a common clock, so no temporal ordering of
commands/logs can take place. Nodes can have different latencies which have to be
kept in mind while designing such algorithms. The complexity increases with the increase
in the number of nodes.
 No common clock causes difficulty in the temporal ordering of events/transactions
 Difficult for a node to get the global view of the system and hence take informed
decisions based on the state of other nodes in the system

Advantages of Distributed System –


 Lower latency than a centralized system – Distributed systems have low latency
because of their high geographical spread, hence leading to less time to get a response.

Disadvantages of Distributed System –


 Difficult to achieve consensus
 Conventional way of logging events by absolute time they occur is not possible here

Applications of Distributed System –


 Cluster computing – a technique in which many computers are coupled together to
work so that they achieve global goals. The computer cluster acts as if it were a
single computer.
 Grid computing – All the resources are pooled together for sharing in this kind of
computing, turning the systems into a powerful supercomputer.

2.2 Networking models

1. Client/Server Model
2. Peer-to-Peer Model
3. Web based Model
4. Emerging File sharing Model: Servent

1. CLIENT-SERVER MODEL:

 Clients: When we say the word Client, it means to talk of a person or an organisation
using a particular service. Similarly, in the digital world a Client is a computer (host)
capable of receiving information or using a particular service from the service
providers (Servers).
 Servers: Similarly, when we talk of the word Servers, it means a person or medium that
serves something. Similarly, in this digital world a Server is a remote computer which
provides information (data) or access to particular services.
So, it is basically the Client requesting something and the Server serving it as long as it is
present in the database.
Advantages of the Client-Server model:
 Centralized system with all data in a single place.
 Cost efficient: requires less maintenance cost and data recovery is possible.
 The capacity of the Clients and Servers can be changed separately.
Disadvantages of the Client-Server model:
 Clients are prone to viruses, Trojans and worms if present in the Server or uploaded
into the Server.
 Servers are prone to Denial of Service (DoS) attacks.
 Data packets may be spoofed or modified during transmission.
 Phishing, or capturing login credentials or other useful information of the user, is
common, and MITM (Man in the Middle) attacks are common.

2. PEER-TO-PEER MODEL:

In Computer Networking, P2P is a file sharing technology, allowing the users to access
mainly the multimedia files like videos, music, e-books, games etc. The individual users in
this network are referred to as peers. The peers request for the files from other peers by
establishing TCP or UDP connections.
How P2P works (Overview)
A peer-to-peer network allows computer hardware and software to communicate without
the need for a server. Unlike client-server architecture, there is no central server for
processing requests in a P2P architecture. The peers directly interact with one another
without the requirement of a central server.
Now, when one peer makes a request, it is possible that multiple peers have a copy of
the requested object. The problem then is how to get the IP addresses of all those peers.
This is decided by the underlying architecture supported by the P2P system. By means of
one of these methods, the client peer can get to know about all the peers which have the
requested object/file, and the file transfer takes place directly between these two peers.

3. WEB-BASED MODEL

A network-centric M&S (Modeling and Simulation) application is one whose components run on different
server computers and communicate over a network (e.g., the Internet, a virtual private network, a
wireless network) using TCP/IP, HTTP, RMI or another protocol.

The server computers running the models or model components of a network-centric M&S
application can be geographically dispersed or be part of a local area network.
A Web-based M&S application is a network-centric M&S application, which uses the
HyperText Transfer Protocol (HTTP) for the communication among its components over a
network.

Users use client computers to access or interact with the M&S application running on
server computer(s).

Client-Server Architecture and Service-Oriented Architecture are popular ones for building
network-centric M&S applications.

Java Platform, Enterprise Edition (Java EE) and Microsoft .NET Framework are two
industry standard platforms for building network-centric M&S applications.

4. FILE SHARING MODEL:

File sharing is the act of sharing one or more files. These files exist on your computer and
can be sent over a computer network and shared with someone in the same house, a team
member at work, a friend in another country, or yourself so that you can access your files
from anywhere. Files can be shared over a local network in an office or at home, or you can
share files over the internet.
Types of File Sharing
There are two ways to share files over a network: directly between two computers and
between a computer and a server.
When a file is shared between a computer and a server, the computer uploads the file to a
storage area on the server where it can be shared with others. People that want access to
the file download it directly from that server.
When a file is shared between two computers over a network, the file is sent directly to the
other person. This is often called peer-to-peer (P2P) file sharing and works by
communicating directly with the other person's device, with no servers involved.

2.3 Communication Service Methods and Data Transmission Modes

2.3.1 Communication Service Methods


1. Serial

i) Synchronous

ii) Asynchronous

2. Parallel

Serial Communication
In telecommunication and computer science, serial communication is the process of
sending/receiving data one bit at a time. It is like you are firing bullets from a machine gun at
a target… that's one bullet at a time! ;)

Parallel Communication

Parallel communication is the process of sending/receiving multiple data bits at a time through
parallel channels. It is like you are firing a shotgun at a target – where multiple bullets are
fired from the same gun at a time! ;)
Serial vs Parallel Communication

Now let's have a quick look at the differences between the two types of communication.

Serial Communication                              Parallel Communication

1. One data bit is transceived at a time          1. Multiple data bits are transceived at a time
2. Slower                                         2. Faster
3. Fewer cables required to transmit data         3. More cables required to transmit data


Major Factors Limiting Parallel Communication

Before the development of high-speed serial technologies, the choice of parallel links over
serial links was driven by these factors:

Speed: Superficially, the speed of a parallel link is equal to bit rate*number of channels. In
practice, clock skew reduces the speed of every link to the slowest of all of the links.

Cable length: Crosstalk creates interference between the parallel lines, and the effect only
magnifies with the length of the communication link. This limits the length of the
communication cable that can be used.

These two are the major factors, which limit the use of parallel communication.

Advantages of Serial over Parallel

Although a serial link may seem inferior to a parallel one, since it can transmit less data per
clock cycle, it is often the case that serial links can be clocked considerably faster than
parallel links in order to achieve a higher data rate. A number of factors allow serial to be
clocked at a higher rate:

Clock skew between different channels is not an issue (for un-clocked asynchronous serial
communication links).

A serial connection requires fewer interconnecting cables (e.g. wires/fibers) and hence
occupies less space. The extra space allows for better isolation of the channel from its
surroundings.

Crosstalk is not as significant an issue, because there are fewer conductors in proximity.

In many cases, serial is a better option because it is cheaper to implement. Many ICs have
serial interfaces, as opposed to parallel ones, so that they have fewer pins and are therefore
less expensive. It is because of these factors that serial communication is preferred over
parallel communication.
How is Data sent Serially?

Since we already know what registers and data bits are, we will now talk in these
terms only.

When a particular data set is in the microcontroller, it is in parallel form, and any bit can be
accessed irrespective of its bit number. When this data set is transferred into the output
buffer to be transmitted, it is still in parallel form. The output buffer converts this data into
serial data (PISO – Parallel In, Serial Out), MSB (Most Significant Bit) first or LSB (Least
Significant Bit) first, according to the protocol. This data is then transmitted in serial
mode.

When this data is received by another microcontroller in its receiver buffer, the receiver
buffer converts it back into parallel data (SIPO – Serial In, Parallel Out) for further
processing. The sketch below illustrates the idea.
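
A small Python sketch of the PISO/SIPO logic described above (only the bit manipulation, not the electrical signalling):

# PISO: shift a byte out one bit at a time, MSB first.
def parallel_to_serial(byte):
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

# SIPO: shift the arriving bits back into a register to rebuild the byte.
def serial_to_parallel(bits):
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

bits = parallel_to_serial(0xA5)          # [1, 0, 1, 0, 0, 1, 0, 1]
assert serial_to_parallel(bits) == 0xA5  # the receiver rebuilds the original byte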

This is how serial communication works! But it is not as simple as it looks; there is a catch
in it, which we will discuss a little later. For now, let's discuss the two
modes of serial data transfer – synchronous and asynchronous.
Serial Transmission Modes

Serial data can be transferred in two modes –

i) Asynchronous

ii) Synchronous

Asynchronous Data Transfer

Data Transfer is called Asynchronous when data bits are not “synchronized” with a clock
line, i.e. there is no clock line at all!

Let's take an analogy. Imagine you are playing a game with your friend where you have to
throw colored balls (let's say we have only two colors – red (R) and yellow (Y)). Let's
assume you have an unlimited number of balls. You have to throw a combination of these
colored balls to your friend. So you start throwing the balls. You throw R, then R, then Y,
then R again, and so on. So you start your sequence RRYR… and then you end your round
and start another round. How will your buddy on the other side know that you have
finished sending the first round of balls and that you are already sending the second
round of balls? He/she will be completely lost! How nice it would be if you both sat
together and fixed a protocol that each round consists of 8 balls! After every 8 balls, you
throw two R balls to ensure that your friend has caught up with you, and then you again
start your second round of 8 balls. This is what we call asynchronous data transfer.

Asynchronous data transfer has a protocol, which is usually as follows:

The first bit is always the START bit (which signifies the start of communication on the
serial line), followed by DATA bits (usually 8-bits), followed by a STOP bit (which signals
the end of data packet). There may be a Parity bit just before the STOP bit. The Parity bit
was earlier used for error checking, but is seldom used these days.

The START bit is always low (0) while the STOP bit is always high (1).

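A small sketch of this framing in Python (only the framing logic; LSB-first data ordering and even parity are assumptions here, since the actual order and parity depend on the protocol):

# Build one asynchronous frame: START bit (0), 8 data bits, optional parity, STOP bit (1).
def frame_byte(byte, use_parity=True):
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first (an assumption)
    frame = [0] + data_bits                           # START bit is always 0
    if use_parity:
        frame.append(sum(data_bits) % 2)              # even parity bit
    frame.append(1)                                   # STOP bit is always 1
    return frame

print(frame_byte(0x41))   # framing the ASCII letter 'A'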


Synchronous Data Transfer
Synchronous data transfer is when the data bits are “synchronized” with a clock pulse.
We will take the same analogy as before. You are still playing the throw-ball game, but this
time you have set a timer in your watch such that it beeps every minute. You
will not throw a ball unless you hear a beep from your watch. As soon as you hear a beep
from your watch, both you and your friend know that you are going to throw a ball.
Both of you can keep track of time using this; say you start a new round after every 8
beeps. Isn't it a much better approach? This approach is what we call synchronous data
transfer.
The concept of synchronous data transfer is simple, and is as follows:
 The basic principle is that data bit sampling (or, in other words, 'recording') is
done with respect to clock pulses.
 Since data is sampled depending upon clock pulses, and since clock sources are
very reliable, there is much less error in synchronous transfer as compared to asynchronous.
Transmission Modes in Computer Networks (Simplex, Half-Duplex and
Full-Duplex)
Transmission mode means transferring of data between two devices. It is also known as
communication mode. Buses and networks are designed to allow communication to occur
between individual devices that are interconnected. There are three types of transmission
mode:-
 Simplex Mode
 Half-Duplex Mode
 Full-Duplex Mode

Simplex Mode
In Simplex mode, the communication is unidirectional, as on a one-way street. Only one of
the two devices on a link can transmit, the other can only receive. The simplex mode can
use the entire capacity of the channel to send data in one direction.
Example: Keyboard and traditional monitors. The keyboard can only introduce input, the
monitor can only give the output.

Half-Duplex Mode
In half-duplex mode, each station can both transmit and receive, but not at the same time.
When one device is sending, the other can only receive, and vice versa. The half-duplex
mode is used in cases where there is no need for communication in both directions at the
same time. The entire capacity of the channel can be utilized for each direction.
Example: Walkie-talkie, in which a message is sent one at a time and messages are sent in
both directions.

Full-Duplex Mode

In full-duplex mode, both stations can transmit and receive simultaneously. In full-duplex
mode, signals going in one direction share the capacity of the link with signals going in the
other direction. This sharing can occur in two ways:
 Either the link must contain two physically separate transmission paths, one for sending
and the other for receiving,
 Or the capacity is divided between signals travelling in both directions.
Full-duplex mode is used when communication in both directions is required all the time.
The capacity of the channel, however, must be divided between the two directions.
Example: Telephone Network in which there is communication between two persons by a
telephone line, through which both can talk and listen at the same time.

2.4 ANALOG AND DIGITAL COMMUNICATIONS:

Communication systems employing electrical signals to convey information from one place
to another over a pair of wires provided an early solution to the problem of fast and
accurate long-distance communication. Today communication enters our daily
lives in so many different ways that it is easy to overlook the multitude of its facets. The
mobile phones in our hands, and the radio and the television, which are basic and necessary
parts of our life, are capable of providing rapid communication from every corner of the globe.
In this section, analog and digital communication, which are the major modes of
communication, are discussed, with an explanation of analog communication, digital
communication and their advantages and disadvantages in detail.

Analog Communication
In analog communication, the message or the information to be transmitted is analog in
nature. This analog message is obtained from a source such as speech, video, audio etc.
The message signal in this case is modulated onto a high-frequency carrier inside the transmitter
in order to produce the modulated signal. This modulated signal is then transmitted with the
help of a transmitting antenna to travel across the transmission channel.

Digital Communication
In digital communication, the message signal is transmitted in digital form; that is,
digital communication involves transmission of data or information in digital form.
Model of a Digital Communication System:
The overall purpose of these systems is to carry the message, or sequence of symbols,
coming out of the source to the destination point at as high a data rate and accuracy as
possible. The source and destination points are physically separated in space, and a
communication channel is used to connect the source and the destination.

Discrete Information Source: In the case of digital modulation, the information source produces a
message signal which is not a continuously time-varying signal; rather, the message signal is
discrete with respect to time. The output of a discrete information source, such
as a teletype or the numeric output of a computer, consists of a sequence of discrete symbols.
Source Encoder and Decoder: The symbols produced by the information source are given to the
source encoder. These symbols cannot be transmitted directly; they need to be first converted
into digital form by the source encoder, which assigns code words to the symbols. At the
receiver side these signals are decoded by the source decoder to obtain the signal in the
desired form.
Channel Encoder and Decoder: After the message signal has been converted into a binary sequence
by the source encoder, the signal is ready to be transmitted through the channel. The
communication channel adds noise and interference to the signal being transmitted. To
combat these errors, channel encoding is done: the channel encoder adds some redundant
bits (binary) to the input signal.
The channel decoder at the receiver is thus able to reconstruct accurate, error-free bits and thus
reduce the effect of channel noise and distortion at the receiver end.
Digital Modulator and Demodulator: Modulation and demodulation of digital signal are
done with the help of Digital Modulator and Demodulators respectively.
Communication Channel: The connection between the transmitter and the receiver is
established through a communication channel. The communication can take place through
wire lines, wireless or fibre-optic channels. Other media, such as optical disks, magnetic
tapes and disks etc., may also come under the category of communication channels.
NOISE AND DISTORTION

 Noise is unwanted electrical signals that are introduced by circuit
components or natural disturbances.
 Distortion is an unwanted change in the signal waveform as it travels through
the system.
 Both noise and distortion can cause communication errors.
 These errors result in extra bits, missing bits, or bits whose states have been
changed.
 Crosstalk is unwanted coupling between signal paths.

White Noise is the unavoidable noise background to all electronic processes.

Impulse Noise is caused by:
 Lightning flashes during thunderstorms.
 Voltage changes in adjacent lines or circuitry surrounding the communications line.
 Intermittent electrical connections in any of the communications equipment through
which the signal passes, or other such causes.
Such noise can totally destroy many bits of digital data, but causes very little damage to analog data.

Delay Distortion:

Occurs when the method of transmission involves transmission at different frequencies.


The bits transmitted at one frequency may travel slightly faster than the bits transmitted at
another frequency.
Equalizer:
It is a piece of equipment that tries to compensate for both attenuation distortion and delay
distortion.

CHANNEL CAPACITY
The information-carrying capacity of a communications channel is directly proportional to its
bandwidth.

For example, a random stream of bits going across the voice bandwidth has a maximum
capacity of 33,600 bits per second (approx). This is demonstrated by using Shannon's Law.

Using Shannon's Law:
Signal-to-noise ratio: roughly 38 dB (best case)
Spectrum = 300-3400 Hz (range of frequencies available for use)
Bandwidth = 3100 Hz (difference between highest and lowest frequencies)
Channel Capacity = 33,600 bps (maximum practical transmission rate)


According to Shannon's Law, the theoretical maximum speed, in the best case, is between 38,000 and
39,000 bps.
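
These figures can be checked with the Shannon-Hartley formula C = B log2(1 + S/N); in the sketch below the ~38 dB signal-to-noise ratio is an assumed best-case value for an analog voice line, not a measured figure.

# Rough check of the best-case voiceband capacity using Shannon's Law.
import math

bandwidth_hz = 3100                  # 300 Hz - 3400 Hz voiceband
snr_db = 38                          # assumed best-case signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)     # about 6300 as a power ratio

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(round(capacity_bps))           # roughly 39,000 bps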

Narrowband (0-300 Hz) channels – used for non-voice services, e.g., teletypewriter and
other low-speed data transmission.

Voiceband (300-3400 Hz) channels – used for voice transmission, foreign exchange
service and data communications.

Wideband or Broadband – used for high-speed data, facsimile and video
transmission.

DATA TRANSMISSION CODES

The definition of the combination of bits used to represent each character is called
the Data Communications Code.
For example, there may be an 8-bit data communication code in which the letter A is
represented by 10000011 and the number 9 by 01110011.
There are many different codes.
All the codes are based on the use of bits to represent characters. The
number of different characters that can be represented by the code
depends on the number of bits required to form a character.

Two frequently used codes are ASCII and EBCDIC;
BAUDOT and BCD are two others.
ASCII stands for American Standard Code for Information Interchange.
EBCDIC stands for Extended Binary Coded Decimal Interchange Code.
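
A tiny sketch of this idea in Python: the same character maps to different bit patterns under different codes (the EBCDIC value for 'A', 0xC1, is hard-coded here for illustration).

# The letter 'A' under two different data communication codes.
def ascii_bits(ch):
    return format(ord(ch), "08b")            # 8-bit ASCII pattern

print("ASCII  'A' :", ascii_bits("A"))       # 01000001
print("EBCDIC 'A' :", format(0xC1, "08b"))   # 11000001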

2.6 Switching and Multiplexing

Switching and multiplexing are both techniques we use to make communication more
economical and scalable.

Switching
Fully connected networks don't scale well, but you still need to let any possible pair of
nodes communicate. Switching is the idea that you can dynamically configure a network
which is less than fully connected in order to join any two nodes for communication.

Two ways to switch: you can establish your own dedicated path (circuit switching) or you
can take whatever path is available at the time you send data (packet switching).
Circuit switching

Circuit switching maintains the idea of dedicated connections between two end points, but
allows for sharing of channels within the network, and hence is much more scalable.
Circuit switching takes advantage of the fact that while everybody needs to be able to talk
to everybody else, they aren't likely to all do so at the same time.

The telephone network is based on circuit switching. To make a phone call you ask the
PSTN to establish a dedicated circuit for you. It does this by finding unused channels all
along the way through the network and dedicates them to your call. When you actually
start exchanging data (talking) all of your data follows the same path or circuit through the
network. If you pause in your conversation the circuit you're using is idle, wasting
bandwidth. But you never lose data because you have a guaranteed, reserved circuit, so it is
impossible for the system to be too busy to handle your data.

Pros
 Guarantee of performance through dedication of resources.
 Messages all follow the same route, preserving ordering and (perhaps) inter-message timing.
 No buffering in the switches (data flows continuously).

Cons
 Wasted resources due to idle, dedicated resources.
 Potentially large set-up time (round trip time) could be unacceptable for short
message exchange.
 If part of the circuit (link or switch) fails your connection is lost.
 Poorly matched to traffic since it reserves a fixed amount of capacity.
 Circuit routing fixed so can't adapt to changes in network.
 Usually assumes symmetrical traffic flows for full-duplex channels.
 Alternatives to circuit switching are message switching (store and forward) and
packet switching.

Message Switching

Message switching is the way (at the high level, at least) that USENET articles are
distributed. Each article is sent in its entirety from one news host to another before the
next article is sent. During the time that an article is in transit no other article can be sent.
The potentially large size of messages means that the routers can be busy for long periods
handling one message, and the messages take a lot of storage.

Packet Switching

Packet switching alleviates the problems of circuit and message switching. A small upper
bound is put on the maximum size of a packet sent through the network. No reserved
channel is created ahead of time. Each packet belonging to a single message may take a
different route through the network.

Since there is no reservation of capacity it is possible that congestion becomes a problem.


Too many packets trying to get through the same router requires that the router be able to
buffer packets. All buffers have finite size, so it is possible that packets are dropped.

Pros
 No set-up time.
 No assumption of symmetrical traffic flow for full-duplex.
 No wasted resources due to dedicated, idle resources.
 Good match to bursty traffic.
 Potential for more robust, adaptive behavior in the event of a down link or router.
 Routing algorithm can adapt the route per packet based on network loading.
Cons
 Congestion is possible in routers – lost packets.
 Packets may be re-ordered by the network.
 Time spent in each router deciding which path to take.
 Each packet has to carry more information with it so it can be routed.
 Router must buffer as packets are sent store-and-forward style.

Multiplexing

Long distance links use high capacity point-to-point connections with a single physical
medium. Some means must be found of sharing the capacity to enable multiple
simultaneous uses of the medium. The motivation is the same here as for time shared
operating systems (one expensive resource shared by many people). Think of the links
between the regional phone switches.

What is the relationship between switching and multiplexing?

Multiplexing divides a fat pipe into independently useable portions. Switching could be
done without multiplexing, but since the point of switching is to share a network with
fewer links than the fully-connected network, it is quite likely these few links will be fat and
multiplexing will be used.

There are three ways to divide (and hence share) a channel: time, frequency, code.

Frequency Division Multiplexing

Possible when a single source requires less than the total available bandwidth in the
medium.
Frequency modulation lets data be moved to any particular part of the frequency spectrum
of the channel. Multiple sources can thus share the same medium, by using different parts
of the channel simultaneously.

FDM allows for full duplex modems, radio, and TV.

The carrier frequency is what shifts the spectrum of a channel.

An individual TV signal is also an example of FDM:
 one side-band of the 4 MHz signal
 separate color carrier and audio carrier
 total is 6 MHz

Example: telephone system trunks

Remember how thick the 2100 TP cable was? Imagine that crossing the country
(where many more than 2100 simultaneous conversations are possible).
Microwave radio and co-axial cable are used (since they have greater bandwidth), and
multiplexing must obviously be done.
 12 voice channels (4 kHz each) form a Group (48 kHz at 60 kHz)
 5 Groups (60 voice channels) form a Supergroup (240 kHz at 312 kHz)
 10 Supergroups (600 voice channels) form a Mastergroup (2.52 MHz at 564 kHz)
 ... (10,800 voice channels) form a Jumbogroup multiplex

Many levels of modulation/demodulation can result in distortion.

Time Division Multiplexing

If the data rate of a medium is larger than the data rate (bps) of a source channel, then
multiple channels can be sent by allotting different channels different slices of time.
Multitasking in an OS is TDM of the CPU.

The multiple channels can be interleaved by bit, byte, or frame.

Synchronous TDM refers to the fact that a time slot is reserved for each source

 this allows for a guaranteed portion of the data rate to go to each channel
 (important for sending things like voice and video)
 has the potential to waste data capacity (if the source doesn't need its slot)
 sources need not all be the same speed
The issues of link control (flow, error control) are handled per channel, independent of the
TDM
Example: TDM Carrier System

Long-haul TDM transmission is based on PCM for voice: DS-1 (most usually called T1).
4 kHz voice => 8000 samples/second required
8 bits/channel * 24 channels + 1 framing bit = 193 bits per frame
8000 frames/second * 193 bits = 1.544 Mbps
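
Reproducing the DS-1/T1 arithmetic in a few lines of Python:

# 24 PCM voice channels of 8 bits each, plus 1 framing bit, sent 8000 times per second.
channels = 24
bits_per_sample = 8
framing_bits = 1
frames_per_second = 8000                                       # 4 kHz voice sampled at 8 kHz

bits_per_frame = channels * bits_per_sample + framing_bits     # 193 bits
t1_rate_bps = bits_per_frame * frames_per_second               # 1,544,000 bps
print(bits_per_frame, t1_rate_bps)                             # 193 1544000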

Digital Service hierarchy

Standard    No. voice channels    Data rate (bps)    Carrier
DS-0        1                     64 k               DS-0
DS-1        24                    1.544 M            T1
DS-2        96                    6.312 M            T2
DS-3        672                   44.736 M           T3

Example: Cogent MANs


US-wide switched LAN network. Two regional rings (fiber, 1 Tbps), one for east and
one for west coast, interconnect city-wide MANs.

Multiplexing is a technique by which different analog and digital streams of transmission
can be simultaneously processed over a shared link. Multiplexing divides the high-capacity
medium into low-capacity logical media which are then shared by different streams.
Communication is possible over the air (radio frequency), using a physical media (cable),
and light (optical fiber). All mediums are capable of multiplexing.
When multiple senders try to send over a single medium, a device called Multiplexer
divides the physical channel and allocates one to each. On the other end of communication,
a De-multiplexer receives data from a single medium, identifies each, and sends to
different receivers.

Frequency Division Multiplexing


When the carrier is frequency, FDM is used. FDM is an analog technology. FDM divides the
spectrum or carrier bandwidth into logical channels and allocates one user to each channel.
Each user can use the channel frequency independently and has exclusive access to it. All
channels are divided in such a way that they do not overlap with each other. Channels are
separated by guard bands. A guard band is a frequency band which is not used by either channel.
Time Division Multiplexing
TDM is applied primarily to digital signals but can be applied to analog signals as well. In
TDM the shared channel is divided among its users by means of time slots. Each user can
transmit data within the provided time slot only. Digital signals are divided into frames,
equivalent to time slots, i.e. frames of an optimal size which can be transmitted in a given
time slot.
TDM works in synchronized mode. Both ends, i.e. Multiplexer and De-multiplexer are
timely synchronized and both switch to next channel simultaneously.

When channel A transmits its frame at one end, the de-multiplexer provides the medium to
channel A on the other end. As soon as channel A's time slot expires, this side switches
to channel B. On the other end, the de-multiplexer works in a synchronized manner and
provides the medium to channel B. Signals from different channels travel the path in an
interleaved manner.
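
A toy Python sketch of this synchronous TDM behaviour, where the slot position alone tells the de-multiplexer which channel a unit belongs to:

# Three channels, three frames each; the multiplexer takes one frame per channel per time slot.
channels = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3"],
}

link = [frames[i] for i in range(3) for frames in channels.values()]
print(link)            # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']

# De-multiplexer: every len(channels)-th unit, starting at a channel's slot, belongs to that channel.
received = {name: link[i::len(channels)] for i, name in enumerate(channels)}
print(received["B"])   # ['B1', 'B2', 'B3']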
Statistical TDM

Tries to reduce the wasted capacity which is inevitable with synchronous TDM when not all channels
have data during every slot.
A statistical multiplexer can support many more devices given the same data rate.

The design of a statistical TDM depends on the expected utilization of each channel.

Potential problem: the peak load could exceed capacity, since the design is statistical. Buffers must
be provided to soak up the extra bits when over capacity.

Wavelength Division Multiplexing


Light has different wavelengths (colors). In fiber-optic mode, multiple optical carrier
signals are multiplexed into an optical fiber by using different wavelengths. This is an
analog multiplexing technique and is done conceptually in the same manner as FDM, but
uses light as the signal.

Further, on each wavelength time division multiplexing can be incorporated to


accommodate more data signals.

Code Division Multiplexing


Multiple data signals can be transmitted over a single frequency by using Code Division
Multiplexing. FDM divides the frequency into smaller channels, but CDM allows its users the
full bandwidth and lets them transmit signals all the time, using a unique code. CDM uses orthogonal
codes to spread the signals.
Each station is assigned a unique code, called a chip sequence. Signals travel with these codes
independently, inside the whole bandwidth. The receiver knows in advance the chip code of the
signal it has to receive.
CDMA – Code Division Multiple Access

GSM uses TDM and FDM to divide the channel and dedicate time/space to a user. The
whole point is to avoid multiple stations transmitting on the same frequency at the same
time. An alternative strategy is to use spread spectrum techniques so collisions aren't a
problem. All conversations use all the available bandwidth at the same time.
Divide each bit interval into smaller units called chips. Real schemes use 64 or 128 chips
per bit. Each chip is one of two signals (i.e. it is a binary digital signal).

Each station is given a unique m bit chip sequence. When the station wants to transmit a 1
bit, it sends the m bit sequence. When it wants to transmit a 0 bit, it sends the complement
of its m bit sequence.
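
A toy Python sketch of the chip-sequence idea, using bipolar (+1/-1) chips and two orthogonal 4-chip codes (real systems use 64 or 128 chips per bit):

# Each station spreads its bit over its own orthogonal chip sequence; the channel adds the signals.
code_a = [+1, +1, +1, +1]          # Walsh-style orthogonal codes
code_b = [+1, -1, +1, -1]

def transmit(bit, code):           # a 1 bit sends the code, a 0 bit sends its complement
    sign = +1 if bit == 1 else -1
    return [sign * c for c in code]

channel = [a + b for a, b in zip(transmit(1, code_a), transmit(0, code_b))]

def decode(signal, code):          # correlate the received signal with a station's code
    return 1 if sum(s * c for s, c in zip(signal, code)) > 0 else 0

print(decode(channel, code_a), decode(channel, code_b))   # 1 0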

FDM strategy

100 channels via FDM gives you 10 kHz per channel.
Assuming perfect electronics, each conversation is given a 10 kHz subchannel.
With a signalling/encoding scheme of 1 bit/Hz, each subchannel yields 10 kbps.

TDM strategy

The 1 MHz channel with a signalling/encoding scheme of 1 bit/Hz yields 1 Mbps.


This could be divided into 100 bit frames transmitted every 1/10,000 of a second
(10k frames/sec).
Ignoring framing overhead and control, each station could be allocated 1 bit of each
frame, so each conversation sees a rate of 1 bit/frame * 10k frames/sec = 10k bps.

CDMA strategy

Each conversation is given the entire 1 MHz channel to use.
With the same signalling/encoding scheme of 1 bit/Hz, each conversation now gets a
capacity of 1 Mbps. But with CDMA the signalling rate is in chips/sec, and there are m
chips/bit, so the chip rate is greater than the bit rate.

If the chips/bit, m, is equal to 100, then each station sends 1 Mbps / 100 chips/bit =
10 kbps. If the chips/bit, m, is less than 100, the actual data rate for each
conversation will be greater than 10 kbps.

CDMA can be a more efficient (bits/Hz) scheme for sharing the bandwidth than TDM
or FDM.

Some means has to be found for allowing all stations to transmit simultaneously
across the entire frequency spectrum of the channel. The resulting interference or
collisions must be able to be resolved by the receiver.

Switching is the process of forwarding packets coming in from one port to a port leading towards
the destination. When data comes in on a port it is called ingress, and when data leaves a
port or goes out it is called egress. A communication system may include a number of
switches and nodes. At a broad level, switching can be divided into two major categories:
 Connectionless: The data is forwarded on the basis of forwarding tables. No previous
handshaking is required and acknowledgements are optional.
 Connection Oriented: Before switching data to be forwarded to the destination, there is a need to
pre-establish a circuit along the path between both endpoints. Data is then forwarded over that
circuit. After the transfer is completed, the circuit can be kept for future use or can be torn
down immediately.

Circuit Switching
When two nodes communicate with each other over a dedicated communication path, it is
called circuit switching. There is a need for a pre-specified route along which the data will travel,
and no other data is permitted on it. In circuit switching, to transfer the data, a circuit must be
established so that the data transfer can take place.
Circuits can be permanent or temporary. Applications which use circuit switching may
have to go through three phases:
 Establish a circuit
 Transfer the data
 Disconnect the circuit

Circuit switching was designed for voice applications. Telephone is the best suitable
example of circuit switching. Before a user can make a call, a virtual path between caller
and callee is established over the network.

Message Switching
This technique was somewhere in the middle between circuit switching and packet switching. In
message switching, the whole message is treated as a data unit and is switched/transferred in its
entirety.
A switch working on message switching first receives the whole message and buffers it
until there are resources available to transfer it to the next hop. If the next hop does not
have enough resources to accommodate a large message, the message is stored and the
switch waits.

This technique was considered a substitute for circuit switching, since in circuit switching the
whole path is blocked for two entities only. Message switching has since been replaced by packet
switching. Message switching has the following drawbacks:
 Every switch in transit path needs enough storage to accommodate entire message.
 Because of store-and-forward technique and waits included until resources are available,
message switching is very slow.
 Message switching was not a solution for streaming media and real-time applications.

Packet Switching
The shortcomings of message switching gave birth to the idea of packet switching. The entire
message is broken down into smaller chunks called packets. The switching information is
added to the header of each packet, and each packet is transmitted independently.
It is easier for intermediate networking devices to store small packets, and they do not
take many resources either on the carrier path or in the internal memory of switches.
Packet switching enhances line efficiency as packets from multiple applications can be
multiplexed over the carrier. The internet uses packet switching technique. Packet
switching enables the user to differentiate data streams based on priorities. Packets are
stored and forwarded according to their priority to provide quality of service.

2.7 OSI Model


o OSI stands for Open System Interconnection. It is a reference model that describes how information
from a software application in one computer moves through a physical medium to a software
application in another computer.
o OSI consists of seven layers, and each layer performs a particular network function.
o The OSI model was developed by the International Organization for Standardization (ISO) in 1984, and it
is now considered an architectural model for inter-computer communication.
o OSI model divides the whole task into seven smaller and manageable tasks. Each layer is assigned a
particular task.
o Each layer is self-contained, so that the task assigned to each layer can be performed independently.

Characteristics of OSI Model:


o The OSI model is divided into two layers: upper layers and lower layers.
o The upper layer of the OSI model mainly deals with the application related issues, and they are
implemented only in the software. The application layer is closest to the end user. Both the end user
and the application layer interact with the software applications. An upper layer refers to the layer
just above another layer.
o The lower layer of the OSI model deals with the data transport issues. The data link layer and the
physical layer are implemented in hardware and software. The physical layer is the lowest layer of
the OSI model and is closest to the physical medium. The physical layer is mainly responsible for
placing the information on the physical medium.

Functions of the OSI Layers


There are the seven OSI layers. Each layer has different functions. A list of seven layers are given below:

1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
Physical layer
o The main functionality of the physical layer is to transmit the individual bits from one node to
another node.
o It is the lowest layer of the OSI model.
o It establishes, maintains and deactivates the physical connection.
o It specifies the mechanical, electrical and procedural network interface specifications.

Functions of a Physical layer:


o Line Configuration: It defines the way how two or more devices can be connected physically.
o Data Transmission: It defines the transmission mode whether it is simplex, half-duplex or full-
duplex mode between the two devices on the network.
o Topology: It defines the way how network devices are arranged.
o Signals: It determines the type of the signal used for transmitting the information.

Data-Link Layer

o This layer is responsible for the error-free transfer of data frames.


o It defines the format of the data on the network.
o It provides a reliable and efficient communication between two or more devices.
o It is mainly responsible for the unique identification of each device that resides on a local network.
o It contains two sub-layers:
o Logical Link Control Layer
o It is responsible for transferring the packets to the network layer of the receiving
device.
o It identifies the address of the network layer protocol from the header.
o It also provides flow control.
o Media Access Control Layer
o A Media access control layer is a link between the Logical Link Control layer and the
network's physical layer.
o It is used for transferring the packets over the network.

Functions of the Data-link layer


o Framing: The data link layer translates the physical layer's raw bit stream into packets known as frames.
The data link layer adds a header and trailer to the frame. The header which is added to the frame
contains the hardware destination and source addresses.

o Physical Addressing: The Data link layer adds a header to the frame that contains a destination
address. The frame is transmitted to the destination address mentioned in the header.
o Flow Control: Flow control is a main functionality of the data-link layer. It is the technique
through which a constant data rate is maintained on both sides so that no data gets corrupted. It
ensures that a transmitting station, such as a server with higher processing speed, does not overwhelm
a receiving station with lower processing speed.
o Error Control: Error control is achieved by adding a calculated value, the CRC (Cyclic Redundancy
Check), that is placed in the data link layer's trailer which is added to the message frame before it is
sent to the physical layer. If any error occurs, the receiver sends an acknowledgment requesting
retransmission of the corrupted frames (a small sketch of the CRC idea appears after this list).
o Access Control: When two or more devices are connected to the same communication channel, then
the data link layer protocols are used to determine which device has control over the link at a given
time.
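
A small sketch of the CRC idea from the Error Control point above; CRC-32 from Python's standard zlib module stands in here for whichever CRC a real data-link protocol uses.

# Sender appends a CRC trailer to the frame; receiver recomputes it to detect corruption.
import zlib

def add_trailer(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")          # frame = payload + CRC trailer

def check_frame(frame: bytes) -> bool:
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

frame = add_trailer(b"hello data link layer")
print(check_frame(frame))                            # True
corrupted = b"x" + frame[1:]                         # a byte flipped in transit
print(check_frame(corrupted))                        # False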

Network Layer
o It is layer 3; it manages device addressing and tracks the location of devices on the network.
o It determines the best path to move data from source to destination based on the network
conditions, the priority of service, and other factors.
o The network layer is responsible for routing and forwarding the packets.
o Routers are layer 3 devices; they are specified in this layer and used to provide routing
services within an internetwork.
o The protocols used to route the network traffic are known as network layer protocols. Examples of
such protocols are IP and IPv6.

Functions of Network Layer:


o Internetworking: Internetworking is the main responsibility of the network layer. It provides a
logical connection between different devices.
o Addressing: The network layer adds the source and destination addresses to the header of the packet.
Addressing is used to identify the device on the internet.
o Routing: Routing is the major component of the network layer, and it determines the best optimal
path out of the multiple paths from source to the destination.
o Packetizing: The network layer receives the data from the upper layer and converts it into
packets. This process is known as packetizing. It is achieved by the Internet Protocol (IP).

Transport Layer
o The transport layer is layer 4; it ensures that messages are transmitted in the order in which they are
sent and that there is no duplication of data.
o The main responsibility of the transport layer is to transfer the data completely.
o It receives the data from the upper layer and converts them into smaller units known as segments.
o This layer can be termed as an end-to-end layer as it provides a point-to-point connection between
source and destination to deliver the data reliably.

The two protocols used in this layer are:

o Transmission Control Protocol


o It is a standard protocol that allows the systems to communicate over the internet.
o It establishes and maintains a connection between hosts.
o When data is sent over the TCP connection, then the TCP protocol divides the data into
smaller units known as segments. Each segment travels over the internet using multiple
routes, and they arrive in different orders at the destination. The transmission control
protocol reorders the packets in the correct order at the receiving end.
o User Datagram Protocol
o User Datagram Protocol is a transport layer protocol.
o It is an unreliable transport protocol, as in this case the receiver does not send any
acknowledgment when a packet is received and the sender does not wait for any
acknowledgment. This is what makes the protocol unreliable.
Functions of Transport Layer:
o Service-point addressing: Computers run several programs simultaneously. For this reason,
data must be transmitted from source to destination not only from one computer to another computer
but also from one process to another process. The transport layer adds a header that contains the
address known as a service-point address, or port address. The responsibility of the network layer is
to transmit the data from one computer to another computer, and the responsibility of the transport
layer is to transmit the message to the correct process.
o Segmentation and reassembly: When the transport layer receives a message from the upper
layer, it divides the message into multiple segments, and each segment is assigned a sequence
number that uniquely identifies it. When the message arrives at the destination, the
transport layer reassembles the message based on the sequence numbers (a short sketch follows this list).
o Connection control: Transport layer provides two services Connection-oriented service and
connectionless service. A connectionless service treats each segment as an individual packet, and
they all travel in different routes to reach the destination. A connection-oriented service makes a
connection with the transport layer at the destination machine before delivering the packets. In
connection-oriented service, all the packets travel in the single route.
o Flow control: The transport layer is also responsible for flow control, but it is performed end-to-end
rather than across a single link.
o Error control: The transport layer is also responsible for error control. Error control is performed
end-to-end rather than across a single link. The sending transport layer ensures that the message reaches
the destination without any error.
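
A toy Python sketch of segmentation and reassembly with sequence numbers (referred to in the Segmentation and reassembly point above):

# Split a message into numbered segments, deliver them out of order, reassemble by sequence number.
def segment(message: bytes, size: int):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    return b"".join(data for _, data in sorted(segments))   # sort by sequence number

segs = segment(b"transport layer demo", 6)
segs.reverse()                       # simulate out-of-order arrival
print(reassemble(segs))              # b'transport layer demo'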

Session Layer
o It is layer 5 in the OSI model.
o The session layer is used to establish, maintain and synchronize the interaction between
communicating devices.

Functions of Session layer:


o Dialog control: Session layer acts as a dialog controller that creates a dialog between two processes
or we can say that it allows the communication between two processes which can be either half-
duplex or full-duplex.
o Synchronization: Session layer adds some checkpoints when transmitting the data in a sequence. If
some error occurs in the middle of the transmission of data, then the transmission will take place
again from the checkpoint. This process is known as Synchronization and recovery.

Presentation Layer

o A Presentation layer is mainly concerned with the syntax and semantics of the information
exchanged between the two systems.
o It acts as a data translator for a network.
o This layer is a part of the operating system that converts the data from one presentation format to
another format.
o The Presentation layer is also known as the syntax layer.

Functions of Presentation layer:


o Translation: The processes in two systems exchange information in the form of character
strings, numbers and so on. Different computers use different encoding methods, so the presentation
layer handles the interoperability between the different encoding methods. It converts the data from a
sender-dependent format into a common format, and changes the common format into a receiver-
dependent format at the receiving end.
o Encryption: Encryption is needed to maintain privacy. Encryption is the process of converting the
sender-transmitted information into another form and sending the resulting message over the
network.
o Compression: Data compression is a process of compressing the data, i.e., it reduces the number of
bits to be transmitted. Data compression is very important in multimedia such as text, audio, video.

Application Layer

o An application layer serves as a window for users and application processes to access network
service.
o It handles issues such as network transparency, resource allocation, etc.
o An application layer is not an application, but it performs the application layer functions.
o This layer provides the network services to the end-users.

Functions of Application layer:


o File transfer, access, and management (FTAM): An application layer allows a user to access the
files in a remote computer, to retrieve the files from a computer and to manage the files in a remote
computer.
o Mail services: An application layer provides the facility for email forwarding and storage.
o Directory services: The application layer provides access to distributed database sources and is used to
provide global information about various objects.
