Application Layer:-
The application layer is the topmost layer of the OSI model. It is the layer through which
users interact with the network, and it provides services directly to the user.
Application Layer protocol:-
1. TELNET:
Telnet stands for TELecommunication NETwork. It provides terminal emulation, allowing a
Telnet client to access the resources of a Telnet server. It is used for managing files over
the internet and for the initial setup of devices such as switches. The telnet command
uses the Telnet protocol to communicate with a remote device or system.
The port number of Telnet is 23.
2. FTP:
FTP stands for File Transfer Protocol. It is the protocol that actually lets us transfer files,
and it can do so between any two machines using it. But FTP is not just a protocol; it is
also a program. FTP promotes sharing of files via remote computers with reliable and
efficient data transfer. The port numbers for FTP are 20 for data and 21 for control.
3. TFTP:
The Trivial File Transfer Protocol (TFTP) is a stripped-down, stock version of FTP, but
it's the protocol of choice if you know exactly what you want and where to find it. It's a
technology for transferring files between network devices and is a simplified version of
FTP. The port number for TFTP is 69.
4. NFS:
It stands for Network File System. It allows remote hosts to mount file systems over a
network and interact with those file systems as though they were mounted locally. This
enables system administrators to consolidate resources onto centralized servers on the
network.
5. SMTP:
It stands for Simple Mail Transfer Protocol. It is part of the TCP/IP protocol suite. Using a
process called "store and forward," SMTP moves your email on and across networks. It
works closely with the Mail Transfer Agent (MTA) to send your
communication to the right computer and email inbox. The port number for SMTP is 25.
6. LPD:
It stands for Line Printer Daemon. It is designed for printer sharing and is the part that
receives and processes print requests. A "daemon" is a server or agent.
7. SNMP:
It stands for Simple Network Management Protocol. It gathers data by polling the devices
on the network from a management station at fixed or random intervals, requiring
them to disclose certain information. It is a way for devices to share information about
their current state, and also a channel through which an administrator can modify pre-
defined values. SNMP uses UDP port 161 for queries and UDP port 162 for traps.
8. DNS:
It stands for Domain Name System. Every time you use a domain name, a DNS
service must translate the name into the corresponding IP address. For example, the
domain name www.abc.com might translate to 198.105.232.4.
The port number for DNS is 53.
9. DHCP:
It stands for Dynamic Host Configuration Protocol. It assigns IP addresses to hosts.
There is a lot of additional information a DHCP server can provide to a host when the host
registers for an IP address with the DHCP server. The port numbers for DHCP are 67
(server) and 68 (client).
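For quick reference, the well-known ports mentioned above can be collected into a small lookup table. A minimal sketch in Python; the names and values are simply those listed in the sections above:

```python
# Well-known ports of the application layer protocols discussed above.
# Protocols that use two ports (FTP, SNMP, DHCP) list both.
WELL_KNOWN_PORTS = {
    "telnet":   [23],
    "ftp-data": [20],
    "ftp-ctrl": [21],
    "tftp":     [69],
    "smtp":     [25],
    "snmp":     [161, 162],   # 161 for queries, 162 for traps
    "dns":      [53],
    "dhcp":     [67, 68],     # 67 server side, 68 client side
}

def port_of(protocol: str) -> list[int]:
    """Return the well-known port(s) for a protocol name."""
    return WELL_KNOWN_PORTS[protocol.lower()]
```

Looking up `port_of("DNS")` then returns `[53]`, and `port_of("dhcp")` returns both ports.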
1. Centralized systems
2. Decentralized systems
3. Distributed systems
Centralized systems
Centralized systems are systems that use client/server architecture, where one or more
client nodes are directly connected to a central server. This is the most commonly used
type of system in many organisations, where a client sends a request to a company server
and receives the response.
Example –
Wikipedia. Consider a massive server to which we send our requests; the server responds
with the article that we requested. Suppose we enter the search term 'junk food' in the
Wikipedia search bar. This search term is sent as a request to the Wikipedia servers
(mostly located in Virginia, U.S.A.), which then respond with articles based on relevance.
In this situation, we are the client node and the Wikipedia servers are the central server.
Characteristics of Centralized System –
Presence of a global clock: As the entire system consists of a central node (a server/a
master) and many client nodes (a computer/a slave), all client nodes sync up with the
global clock (the clock of the central node).
One single central unit: One single central unit which serves/coordinates all the other
nodes in the system.
Dependent failure of components: Central node failure causes the entire system to fail.
This makes sense because when the server is down, no other entity is there to
send/receive responses/requests.
Client-Server architecture: The central node that serves the other nodes in the system is
the server node and all the other nodes are the client nodes.
Organisations Using –
National Informatics Center (India), IBM
2. DECENTRALIZED SYSTEMS:
In decentralized systems, every node makes its own decision. The final behavior of the
system is the aggregate of the decisions of the individual nodes. Note that there is no single
entity that receives and responds to the request.
Example –
Bitcoin. Let's take Bitcoin as an example because it is the most popular use case of
decentralized systems. No single entity/organisation owns the Bitcoin network. The
network is the sum of all the nodes that talk to each other to maintain the amount of
bitcoin every account holder has.
Characteristics of Decentralized System –
Lack of a global clock: Every node is independent of the others and hence runs and
follows its own clock.
Multiple central units (Computers/Nodes/Servers): More than one central unit can listen
for connections from other nodes.
Dependent failure of components: failure of one central node causes a part of the system
to fail, not the whole system.
Use Cases –
Blockchain
Decentralized databases – The entire database is split into parts and distributed to
different nodes for storage and use. For example, records with names starting with 'A' to
'K' in one node, 'L' to 'N' in a second node and 'O' to 'Z' in a third node.
Cryptocurrency
Organisations Using – Bitcoin, Tor network
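The database-partitioning example above can be sketched in a few lines of Python; the node names and letter ranges are the hypothetical ones from the example:

```python
# Partition records across three hypothetical nodes by the first letter
# of the record's name, as in the example above ('A'-'K', 'L'-'N', 'O'-'Z').
RANGES = {
    "node1": ("A", "K"),
    "node2": ("L", "N"),
    "node3": ("O", "Z"),
}

def node_for(name: str) -> str:
    """Return the node responsible for storing a record with this name."""
    first = name[0].upper()
    for node, (lo, hi) in RANGES.items():
        if lo <= first <= hi:
            return node
    raise ValueError(f"no node covers {name!r}")
```

A record named "Alice" lands on node1, "Mallory" on node2, "Oscar" on node3.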
3. DISTRIBUTED SYSTEMS:
A distributed system allows resource sharing, including software by systems connected to
the network. Examples of distributed systems / applications of distributed computing :
Intranets, Internet, WWW, email.
Example –
Google search system. Each request is worked upon by hundreds of computers which
crawl the web and return the relevant results. To the user, Google appears to be one
system, but it is actually multiple computers working together to accomplish one single
task (returning the results to the search query).
Characteristics of Distributed System –
Concurrency of components: Nodes apply consensus protocols to agree on the same
values/transactions/commands/logs.
Lack of a global clock: All nodes maintain their own clock.
Independent failure of components: In a distributed system, nodes fail independently
without having a significant effect on the entire system. If one node fails, the entire
system sans the failed node continues to work.
Scaling –
Horizontal and vertical scaling is possible.
Components of Distributed System –
1. Client/Server Model
2. Peer-to-Peer Model
3. Web based Model
4. Emerging File sharing Model: Servent
Clients: In everyday language, a client is a person or organisation using a particular
service. Similarly, in the digital world a Client is a computer (Host) that is capable of
receiving information or using a particular service from the service providers (Servers).
Servers: Similarly, when we use the word Server, it means a person or medium that
serves something. In the digital world a Server is a remote computer which provides
information (data) or access to particular services.
So, it is basically the Client requesting something and the Server serving it as long as it
is present in the database.
Advantages of Client-Server model:
Centralized system with all data in a single place.
Cost efficient: requires less maintenance cost, and data recovery is possible.
The capacity of the Clients and Servers can be changed separately.
Disadvantages of Client-Server model:
Clients are prone to viruses, Trojans and worms if these are present in the Server or
uploaded into the Server.
Servers are prone to Denial of Service (DoS) attacks.
Data packets may be spoofed or modified during transmission.
Phishing, or capturing login credentials or other useful information of the user, is
common, and MITM (Man in the Middle) attacks are common.
In Computer Networking, P2P is a file sharing technology that allows users to access
mainly multimedia files like videos, music, e-books, games, etc. The individual users in
this network are referred to as peers. Peers request files from other peers by
establishing TCP or UDP connections.
How P2P works (Overview)
A peer-to-peer network allows computer hardware and software to communicate without
the need for a server. Unlike client-server architecture, there is no central server for
processing requests in a P2P architecture. The peers directly interact with one another
without the requirement of a central server.
When one peer makes a request, it is possible that multiple peers have a copy of the
requested object. The problem then is how to get the IP addresses of all those peers.
This is decided by the underlying architecture supported by the P2P system. By means of
one of these methods, the client peer can learn about all the peers which have the
requested object/file, and the file transfer takes place directly between the two peers.
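One of the simplest such architectures is a centralized index (as in the original Napster design): a directory maps each file name to the peers that hold a copy, and the transfer itself then happens peer-to-peer. A minimal sketch, with hypothetical file names and peer addresses:

```python
# Hypothetical centralized index: file name -> addresses of peers that
# advertise a copy. Only the lookup is centralized; the file transfer
# happens directly between the requesting peer and one of these peers.
INDEX = {
    "song.mp3":  ["10.0.0.5:6881", "10.0.0.9:6881"],
    "book.epub": ["10.0.0.7:6881"],
}

def find_peers(filename: str) -> list[str]:
    """Return the peers advertising a copy of the file (empty if none)."""
    return INDEX.get(filename, [])
```

Fully decentralized designs replace this single dictionary with a flooded query or a distributed hash table, but the lookup-then-transfer flow is the same.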
3. Web-based Model
A Network-Centric M&S application is one whose components run on different server
computers and communicate over a network (e.g., the Internet, a virtual private network,
a wireless network) using TCP/IP, HTTP, RMI or another protocol.
The server computers running the models or model components of a network-centric M&S
application can be geographically dispersed or be part of a local area network.
A Web-based M&S application is a network-centric M&S application, which uses the
HyperText Transfer Protocol (HTTP) for the communication among its components over a
network.
Users use client computers to access or interact with the M&S application running on
server computer(s).
Client-Server Architecture and Service-Oriented Architecture are popular ones for building
network-centric M&S applications.
Java Platform, Enterprise Edition (Java EE) and Microsoft .NET Framework are two
industry standard platforms for building network-centric M&S applications.
File sharing is the act of sharing one or more files. These files exist on your computer and
can be sent over a computer network and shared with someone in the same house, a team
member at work, a friend in another country, or yourself so that you can access your files
from anywhere. Files can be shared over a local network in an office or at home, or you can
share files over the internet.
Types of File Sharing
There are two ways to share files over a network: directly between two computers and
between a computer and a server.
When a file is shared between a computer and a server, the computer uploads the file to a
storage area on the server where it can be shared with others. People that want access to
the file download it directly from that server.
When a file is shared between two computers over a network, the file is sent directly to the
other person. This is often called peer-to-peer (P2P) file sharing and works by
communicating directly with the other person's device, with no servers involved.
1. Serial
i) Synchronous
ii) Asynchronous
2. Parallel
Serial Communication
In Telecommunication and Computer Science, serial communication is the process of
sending/receiving data one bit at a time. It is like firing bullets from a machine gun at a
target… that's one bullet at a time! ;)
Parallel Communication
Parallel communication is the process of sending/receiving multiple data bits at a time
through parallel channels. It is like firing a shotgun at a target – multiple bullets are
fired from the same gun at a time! ;)
Serial vs Parallel Communication
Now let's have a quick look at the differences between the two types of communication.
Serial Communication                          Parallel Communication
1. One data bit is transceived at a time      1. Multiple data bits are transceived at a time
2. Slower                                     2. Faster
Before the development of high-speed serial technologies, the choice of parallel links over
serial links was driven by these factors:
Speed: Superficially, the speed of a parallel link is equal to bit rate*number of channels. In
practice, clock skew reduces the speed of every link to the slowest of all of the links.
Cable length: Crosstalk creates interference between the parallel lines, and the effect only
magnifies with the length of the communication link. This limits the length of the
communication cable that can be used.
These two are the major factors, which limit the use of parallel communication.
Although a serial link may seem inferior to a parallel one, since it can transmit less data per
clock cycle, it is often the case that serial links can be clocked considerably faster than
parallel links in order to achieve a higher data rate. A number of factors allow serial to be
clocked at a higher rate:
Clock skew between different channels is not an issue (for un-clocked asynchronous serial
communication links).
A serial connection requires fewer interconnecting cables (e.g. wires/fibers) and hence
occupies less space. The extra space allows for better isolation of the channel from its
surroundings.
Crosstalk is not as significant an issue, because there are fewer conductors in proximity.
In many cases, serial is a better option because it is cheaper to implement. Many ICs have
serial interfaces, as opposed to parallel ones, so that they have fewer pins and are therefore
less expensive. It is because of these factors, serial communication is preferred over
parallel communication.
How is Data sent Serially?
Since we already know what registers and data bits are, we will now talk in those terms
only. If not, I would recommend that you first take a detour and go through the
introduction of this post by Mayank.
When a particular data set is in the microcontroller, it is in parallel form, and any bit can be
accessed irrespective of its bit number. When this data set is transferred into the output
buffer to be transmitted, it is still in parallel form. This output buffer converts this data into
Serial data (PISO) (Parallel In Serial Out), MSB (Most Significant Bit) first or LSB (Least
Significant Bit) first as according to the protocol. Now this data is transmitted in Serial
mode.
When this data is received by another microcontroller in its receiver buffer, the receiver
buffer converts it back into parallel data (SIPO) (Serial In Parallel Out) for further
processing. The following diagram should make it clear.
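The PISO/SIPO conversion described above can be modelled in software. A minimal Python sketch (in real hardware this job is done by shift registers, not code):

```python
def piso(byte: int, msb_first: bool = True) -> list[int]:
    """Parallel-In Serial-Out: turn one parallel byte into a bit stream,
    MSB first or LSB first according to the protocol."""
    bits = [(byte >> i) & 1 for i in range(8)]   # extract bits, LSB first
    return bits[::-1] if msb_first else bits

def sipo(bits: list[int], msb_first: bool = True) -> int:
    """Serial-In Parallel-Out: reassemble the parallel byte on the
    receiving side from the serial bit stream."""
    if msb_first:
        bits = bits[::-1]
    return sum(bit << i for i, bit in enumerate(bits))
```

Sending a byte through `piso` and feeding the bits into `sipo` on the other end recovers the original value, which is exactly the round trip the transmit and receive buffers perform.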
This is how serial communication works! But it is not as simple as it looks. There is a
catch in it, which we will discuss a little later in this same post. For now, let's discuss
the two modes of serial data transfer – synchronous and asynchronous.
Serial Transmission Modes
i) Asynchronous
ii) Synchronous
Data Transfer is called Asynchronous when data bits are not “synchronized” with a clock
line, i.e. there is no clock line at all!
Let's take an analogy. Imagine you are playing a game with your friend where you have
to throw coloured balls (let's say we have only two colours – red (R) and yellow (Y)).
Assume you have an unlimited number of balls. You have to throw a combination of these
coloured balls to your friend. So you start throwing the balls. You throw R, then R, then Y,
then R again and so on. So you start your sequence RRYR… and then you end your round
and start another round. How will your buddy on the other side know that you have
finished sending the first round of balls and that you are already sending the second
round?? He/she will be completely lost! How nice it would be if you both sat together and
fixed a protocol that each round consists of 8 balls! After every 8 balls, you throw two R
balls to ensure that your friend has caught up with you, and then you start your second
round of 8 balls. This is what we call asynchronous data transfer.
The first bit is always the START bit (which signifies the start of communication on the
serial line), followed by DATA bits (usually 8-bits), followed by a STOP bit (which signals
the end of data packet). There may be a Parity bit just before the STOP bit. The Parity bit
was earlier used for error checking, but is seldom used these days.
The START bit is always low (0) while the STOP bit is always high (1).
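A minimal sketch of building and checking such an asynchronous frame in Python (8 data bits sent LSB-first as in typical UARTs, with an optional even-parity bit; START = 0 and STOP = 1 as stated above):

```python
def make_frame(byte: int, parity: bool = True) -> list[int]:
    """Build an async serial frame: START(0), 8 data bits LSB-first,
    optional even-parity bit, STOP(1)."""
    data = [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    frame = [0] + data                            # START bit is always low
    if parity:
        frame.append(sum(data) % 2)               # even parity over the data
    frame.append(1)                               # STOP bit is always high
    return frame

def parse_frame(frame: list[int], parity: bool = True) -> int:
    """Check framing and parity, then recover the data byte."""
    assert frame[0] == 0 and frame[-1] == 1, "bad START/STOP bit"
    data = frame[1:9]
    if parity:
        assert sum(data) % 2 == frame[9], "parity error"
    return sum(bit << i for i, bit in enumerate(data))
```

With parity enabled each byte costs 11 line bits (1 start + 8 data + 1 parity + 1 stop), which is the framing overhead asynchronous transfer pays for not having a clock line.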
Simplex Mode
In Simplex mode, the communication is unidirectional, as on a one-way street. Only one of
the two devices on a link can transmit, the other can only receive. The simplex mode can
use the entire capacity of the channel to send data in one direction.
Example: Keyboard and traditional monitors. The keyboard can only introduce input, the
monitor can only give the output.
Half-Duplex Mode
In half-duplex mode, each station can both transmit and receive, but not at the same time.
When one device is sending, the other can only receive, and vice versa. The half-duplex
mode is used in cases where there is no need for communication in both directions at the
same time. The entire capacity of the channel can be utilized for each direction.
Example: Walkie- talkie in which message is sent one at a time and messages are sent in
both the directions.
Full-Duplex Mode
In full-duplex mode, both stations can transmit and receive simultaneously. In full-duplex
mode, signals going in one direction share the capacity of the link with signals going in
the other direction. This sharing can occur in two ways:
Either the link must contain two physically separate transmission paths, one for sending
and the other for receiving.
Or the capacity is divided between signals travelling in both directions.
Full-duplex mode is used when communication in both directions is required all the time.
The capacity of the channel, however, must be divided between the two directions.
Example: Telephone Network in which there is communication between two persons by a
telephone line, through which both can talk and listen at the same time.
Communication systems employing electrical signals to convey information from one place
to another over a pair of wires provided an early solution to the problem of fast and
accurate long distance communication. Today communication enters our daily lives in so
many different ways that it is easy to overlook the multitude of its facets. The mobile
phones in our hands and the radio and television, which are basic and necessary parts of
our life, are capable of providing rapid communication from every corner of the globe.
In this section, analog and digital communication, which are the major modes of
communication, are discussed, with an explanation of analog communication, digital
communication and their advantages and disadvantages in detail.
Digital Communication
In digital communication, the message signals are transmitted in digital form; that is,
digital communication involves transmission of data or information in digital form.
Model of a Digital Communication:
The overall purpose of these systems is to transfer the messages or sequences of symbols
coming out of the source to the destination point at as high a data rate and accuracy as
possible. The source and destination points are physically separated in space, and a
communication channel is used to connect the source and the destination.
The channel introduces errors into the transmitted signal. These errors result in extra
bits, missing bits, or bits whose states have been changed.
Delay Distortion: Delay distortion occurs because the different frequency components of
a signal travel at different speeds through the channel and therefore arrive at the
receiver at different times.
CHANNEL CAPACITY
The information-carrying capacity of a communications channel is directly proportional to its
bandwidth.
For example, a random stream of bits going across the voice bandwidth has a maximum
capacity of 33,600 bits per second (approx). This is demonstrated by using Shannon's Law.
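Shannon's Law gives the capacity as C = B·log2(1 + S/N), where B is the bandwidth in Hz and S/N is the signal-to-noise ratio as a power ratio. A small sketch; the 3100 Hz voice bandwidth (roughly 300-3400 Hz) and 35 dB SNR are typical textbook figures assumed here, not values from this document:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Maximum error-free bit rate of a channel: C = B * log2(1 + S/N)."""
    snr = 10 ** (snr_db / 10)          # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr)

# A telephone voice channel: roughly 300-3400 Hz -> 3100 Hz of bandwidth.
c = shannon_capacity(3100, 35)         # capacity ceiling around 36 kbps
```

That ceiling is why analog voiceband modems topped out in the low tens of kilobits per second.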
Narrowband (0-300 Hz) channels - used for non-voice services; e.g., teletypewriter and
other low speed data transmission.
Wideband or Broadband - used for high-speed data, facsimile and video
transmission.
The definition of that combination of bits used to represent each character is called
the Data Communications Code.
For example, there may be an 8 bit data communication code in which the letter A is
represented by 10000011 and the number 9 as 01110011.
There are many different codes.
All the codes are based on the use of bits to represent characters. The
number of different characters that can be represented by the code
depends on the number of bits required to form a character.
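The relationship is simply 2^n distinct characters for an n-bit code, and conversely the code width needed for a character set is the base-2 logarithm rounded up. A quick sketch:

```python
import math

def characters_representable(bits: int) -> int:
    """Number of distinct characters an n-bit code can represent."""
    return 2 ** bits

def bits_needed(num_characters: int) -> int:
    """Minimum code width (bits per character) for a given set size."""
    return math.ceil(math.log2(num_characters))

# 7-bit ASCII can represent 128 characters; an 8-bit code, 256.
```

So a 200-character alphabet already forces an 8-bit code, since 7 bits only cover 128 characters.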
Switching and multiplexing are both techniques we use to make communication more
economical and scalable.
Switching
Fully connected networks don't scale well, but you still need to let any possible pair of
nodes communicate. Switching is the idea that you can dynamically configure a network
which is less than fully connected in order to join any two nodes for communication.
Two ways to switch: you can establish your own dedicated path (circuit switching) or you
can take whatever path is available at the time you send data (packet switching).
Circuit switching
Circuit switching maintains the idea of dedicated connections between two end points, but
allows for sharing of channels within the network, and hence is much more scalable.
Circuit switching takes advantage of the fact that while everybody needs to be able to talk
to everybody else, they aren't likely to all do so at the same time.
The telephone network is based on circuit switching. To make a phone call you ask the
PSTN to establish a dedicated circuit for you. It does this by finding unused channels all
along the way through the network and dedicates them to your call. When you actually
start exchanging data (talking) all of your data follows the same path or circuit through the
network. If you pause in your conversation the circuit you're using is idle, wasting
bandwidth. But you never lose data because you have a guaranteed, reserved circuit, so it is
impossible for the system to be too busy to handle your data.
Pros
Messages all follow same route, preserving ordering and (perhaps) inter message timing.
No buffering in the switches (data flows continuously).
Cons
Wasted resources due to idle, dedicated resources.
Potentially large set-up time (round trip time) could be unacceptable for short
message exchange.
If part of the circuit (link or switch) fails your connection is lost.
Poorly matched to traffic since it reserves a fixed amount of capacity.
Circuit routing fixed so can't adapt to changes in network.
Usually assumes symmetrical traffic flows for full-duplex channels.
Alternatives to circuit switching are message switching (store and forward) and
packet switching.
Message Switching
Message switching is the way (at the high level, at least) that USENET articles are
distributed. Each article is sent in its entirety from one news host to another before the
next article is sent. During the time that an article is in transit no other article can be sent.
The potentially large size of messages means that the routers can be busy for long periods
handling one message, and the messages take a lot of storage.
Packet Switching
Packet switching alleviates the problems of circuit and message switching. A small upper
bound is put on the maximum size of a packet sent through the network. No reserved
channel is created ahead of time. Each packet belonging to a single message may take a
different route through the network.
Pros
No set-up time.
No assumption of symmetrical traffic flow for full-duplex.
No wasted resources due to dedicated, idle resources.
Good match to bursty traffic.
Potential for more robust, adaptive behavior in the event of a down link or router.
Routing algorithm can adapt route per packet based on network loading.
Cons
Packets may arrive out of order and must be re-sequenced at the destination.
Every packet carries header overhead.
Queuing delays vary (jitter), and packets can be lost under congestion.
Each switch must buffer and process every packet.
Multiplexing
Long distance links use high capacity point-to-point connections with a single physical
medium. Some means must be found of sharing the capacity to enable multiple
simultaneous uses of the medium. The motivation is the same here as for time shared
operating systems (one expensive resource shared by many people). Think of the links
between the regional phone switches.
Multiplexing divides a fat pipe into independently useable portions. Switching could be
done without multiplexing, but since the point of switching is to share a network with
fewer links than the fully-connected network, it is quite likely these few links will be fat and
multiplexing will be used.
There are three ways to divide (and hence share) a channel: time, frequency, code.
Frequency Division Multiplexing (FDM)
Possible when a single source requires less than the total available bandwidth of the
medium. Frequency modulation lets data be moved to any particular part of the frequency
spectrum of the channel. Multiple sources can thus share the same medium by using
different parts of the channel simultaneously.
Microwave radio and co-axial cable are used (since they have greater bandwidth).
Time Division Multiplexing (TDM)
If the data rate of a medium is larger than the data rate (bps) of a source channel, then
multiple channels can be sent by allotting different channels different slices of time.
Multitasking in an OS is TDM of the CPU.
Synchronous TDM refers to the fact that a time slot is reserved for each source:
this allows a guaranteed portion of the data rate to go to each channel
(important for sending things like voice and video);
it has the potential to waste data capacity (if the source doesn't need its slot);
sources need not all be the same speed.
The issues of link control (flow, error control) are handled per channel, independent of
the TDM.
Example: TDM Carrier System
Long haul TDM transmission is based on PCM for voice DS-1 (most usually called
T1)
4kHz => 8000 samples/second required
8 bits/channel * 24 channels 1 framing bit = 193 bits
8000 samples/second * 193 bits = 1.544 Mbps
DS-1 24 1.544 M T1
DS-2 96 6.312M T2
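The T1 arithmetic above is easy to check in code:

```python
# DS-1 (T1) frame: 24 voice channels of 8 bits each plus 1 framing bit,
# sent 8000 times per second (the sampling rate required for 4 kHz voice).
BITS_PER_CHANNEL = 8
CHANNELS = 24
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000

frame_bits = BITS_PER_CHANNEL * CHANNELS + FRAMING_BITS   # 193 bits/frame
t1_rate = frame_bits * FRAMES_PER_SECOND                  # 1,544,000 bps
```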
When channel A transmits its frame at one end, the de-multiplexer provides the medium
to channel A on the other end. As soon as channel A's time slot expires, this side switches
to channel B. On the other end, the de-multiplexer works in a synchronized manner and
provides the medium to channel B. Signals from different channels travel the path in an
interleaved manner.
Statistical TDM
Tries to reduce the wasted capacity which is inevitable with TDM when not all channels
have data during every slot.
A statistical multiplexer can support many more devices given the same data rate.
The design of a statistical TDM depends on the expected utilization of each channel.
Potential problem: peak load could exceed capacity, since the design is statistical.
Buffers must be provided to soak up the extra bits when over capacity.
GSM uses TDM and FDM to divide the channel and dedicate time/space to a user. The
whole point is to avoid multiple stations transmitting on the same frequency at the same
time. An alternative strategy is to use spread spectrum techniques so collisions aren't a
problem. All conversations use all the available bandwidth at the same time.
Divide each bit interval into smaller units called chips. Real schemes use 64 or 128 chips
per bit. Each chip is one of two signals (i.e. it is a binary digital signal).
Each station is given a unique m bit chip sequence. When the station wants to transmit a 1
bit, it sends the m bit sequence. When it wants to transmit a 0 bit, it sends the complement
of its m bit sequence.
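A minimal CDMA sketch in Python, using two stations with orthogonal 4-chip sequences written in bipolar form (chip 0 as -1), so that the receiver can separate them with a dot product; real schemes use 64 or 128 chips per bit as noted above:

```python
# Bipolar chip sequences (+1/-1 instead of 1/0) so orthogonality can be
# checked by correlation. These two 4-chip sequences are orthogonal.
CHIPS = {
    "A": [+1, +1, -1, -1],
    "B": [+1, -1, +1, -1],
}

def encode(station: str, bit: int) -> list[int]:
    """Send the chip sequence for a 1 bit, its complement for a 0 bit."""
    seq = CHIPS[station]
    return seq if bit == 1 else [-c for c in seq]

def decode(signal: list[int], station: str) -> int:
    """Correlate the received (summed) signal with one station's chips."""
    seq = CHIPS[station]
    corr = sum(s * c for s, c in zip(signal, seq))
    return 1 if corr > 0 else 0

# Both stations transmit simultaneously; the channel simply adds signals.
combined = [a + b for a, b in zip(encode("A", 1), encode("B", 0))]
```

Correlating `combined` with A's sequence recovers A's 1 bit, and with B's sequence recovers B's 0 bit, even though both used the whole channel at once.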
FDM strategy
100 channels via FDM gives you 10 kHz per channel. Assuming perfect electronics, each
conversation is given a 10 kHz subchannel. With a signalling/encoding scheme of
1 bit/Hz, each subchannel yields 10 kbps.
TDM strategy
Each conversation in turn gets the entire 1 MHz channel for 1/100 of the time. With the
same 1 bit/Hz scheme the channel carries 1 Mbps, so each conversation again gets
10 kbps.
CDMA strategy
Each conversation is given the entire 1 MHz channel to use. With the same
signalling/encoding scheme of 1 bit/Hz, each conversation now gets a capacity of
1 Mbps. But with CDMA the signalling rate is chips/sec, and there are m chips/bit, so the
chip rate is greater than the bit rate.
If the chips/bit, m, is equal to 100, then each station sends 1 Mbps / 100 chips/bit =
10 kbps. If the chips/bit, m, is less than 100, the actual data rate for each conversation
will be greater than 10 kbps.
CDMA can be a more efficient (bits/Hz) scheme for sharing the bandwidth than TDM
or FDM.
Some means has to be found for allowing all stations to transmit simultaneously
across the entire frequency spectrum of the channel. The resulting interference or
collisions must be able to be resolved by the receiver.
Switching is the process of forwarding packets coming in from one port to a port leading
towards the destination. When data comes in on a port it is called ingress, and when data
leaves a port or goes out it is called egress. A communication system may include a
number of switches and nodes. At a broad level, switching can be divided into two major
categories:
Connectionless: The data is forwarded on the basis of forwarding tables. No previous
handshaking is required and acknowledgements are optional.
Connection Oriented: Before data can be switched to the destination, a circuit must be
pre-established along the path between both endpoints. Data is then forwarded on that
circuit. After the transfer is completed, the circuit can be kept for future use or torn
down immediately.
Circuit Switching
When two nodes communicate with each other over a dedicated communication path, it is
called circuit switching. There is a need for a pre-specified route over which the data
will travel, and no other data is permitted. In circuit switching, the circuit must be
established before the data transfer can take place.
Circuits can be permanent or temporary. Applications which use circuit switching may
have to go through three phases:
Establish a circuit
Transfer the data
Disconnect the circuit
Circuit switching was designed for voice applications. The telephone is the most suitable
example of circuit switching. Before a user can make a call, a virtual path between caller
and callee is established over the network.
Message Switching
This technique sits somewhere between circuit switching and packet switching. In
message switching, the whole message is treated as a data unit and is
switched/transferred in its entirety.
A switch working on message switching first receives the whole message and buffers it
until there are resources available to transfer it to the next hop. If the next hop does
not have enough resources to accommodate a large message, the message is stored and
the switch waits.
This technique was considered a substitute for circuit switching, in which the whole path
is blocked for two entities only. Message switching has since been replaced by packet
switching. Message switching has the following drawbacks:
Every switch in the transit path needs enough storage to accommodate the entire message.
Because of the store-and-forward technique and the waits until resources are available,
message switching is very slow.
Message switching was not a solution for streaming media and real-time applications.
Packet Switching
Shortcomings of message switching gave birth to an idea of packet switching. The entire
message is broken down into smaller chunks called packets. The switching information is
added in the header of each packet and transmitted independently.
It is easier for intermediate networking devices to store small size packets and they do not
take much resources either on carrier path or in the internal memory of switches.
Packet switching enhances line efficiency as packets from multiple applications can be
multiplexed over the carrier. The internet uses packet switching technique. Packet
switching enables the user to differentiate data streams based on priorities. Packets are
stored and forwarded according to their priority to provide quality of service.
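The priority-based store-and-forward behaviour described above can be sketched with a priority queue; the packet names and priority values here are hypothetical:

```python
import heapq

class Switch:
    """Store incoming packets and forward the highest-priority one first
    (lower number = higher priority)."""
    def __init__(self):
        self._queue = []
        self._seq = 0            # tie-breaker: preserves arrival order

    def store(self, priority: int, packet: str) -> None:
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def forward(self) -> str:
        _, _, packet = heapq.heappop(self._queue)
        return packet

sw = Switch()
sw.store(2, "bulk-data")
sw.store(0, "voip-frame")    # highest priority, forwarded first
sw.store(1, "web-request")
```

Draining the queue forwards the VoIP frame before the web request and the bulk transfer, which is the quality-of-service ordering the paragraph describes.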
1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
Physical layer
o The main functionality of the physical layer is to transmit the individual bits from one node to
another node.
o It is the lowest layer of the OSI model.
o It establishes, maintains and deactivates the physical connection.
o It specifies the mechanical, electrical and procedural network interface specifications.
Data-Link Layer
o Physical Addressing: The Data link layer adds a header to the frame that contains a destination
address. The frame is transmitted to the destination address mentioned in the header.
o Flow Control: Flow control is the main functionality of the Data-link layer. It is the technique
through which a constant data rate is maintained on both sides so that no data gets corrupted. It
ensures that a transmitting station, such as a server with higher processing speed, does not
overwhelm a receiving station with lower processing speed.
o Error Control: Error control is achieved by adding a calculated value, the CRC (Cyclic Redundancy
Check), which is placed in the Data link layer's trailer added to the message frame before it is
sent to the physical layer. If any error occurs, the receiver sends an acknowledgment requesting
retransmission of the corrupted frames.
o Access Control: When two or more devices are connected to the same communication channel, then
the data link layer protocols are used to determine which device has control over the link at a given
time.
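The CRC check mentioned under Error Control can be sketched in Python. CRC-8 with the polynomial 0x07 is used here purely for illustration; real data-link protocols such as Ethernet use CRC-32:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8: the remainder of dividing the message by the
    generator polynomial (x^8 + x^2 + x + 1 for poly 0x07)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

frame = b"hello"
trailer = crc8(frame)                      # sender appends this to the frame
ok = crc8(frame + bytes([trailer])) == 0   # receiver: remainder 0 => no error
```

Flipping any single bit of the frame makes the receiver's remainder nonzero, which is what triggers the retransmission request described above.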
Network Layer
o It is layer 3, which manages device addressing and tracks the location of devices on the network.
o It determines the best path to move data from source to destination based on network
conditions, the priority of service, and other factors.
o The Network layer is responsible for routing and forwarding the packets.
o Routers are layer 3 devices; they are specified in this layer and used to provide routing
services within an internetwork.
o The protocols used to route network traffic are known as Network layer protocols. Examples of
such protocols are IPv4 and IPv6.
Transport Layer
o The Transport layer is Layer 4. It ensures that messages are transmitted in the order in which
they are sent and that there is no duplication of data.
o The main responsibility of the transport layer is to transfer the data completely.
o It receives the data from the upper layer and converts them into smaller units known as segments.
o This layer can be termed as an end-to-end layer as it provides a point-to-point connection between
source and destination to deliver the data reliably.
Session Layer
o It is layer 5 in the OSI model.
o The Session layer is used to establish, maintain and synchronize the interaction between
communicating devices.
Presentation Layer
o A Presentation layer is mainly concerned with the syntax and semantics of the information
exchanged between the two systems.
o It acts as a data translator for a network.
o This layer is a part of the operating system that converts the data from one presentation format to
another format.
o The Presentation layer is also known as the syntax layer.
Application Layer
o An application layer serves as a window for users and application processes to access network
service.
o It handles issues such as network transparency, resource allocation, etc.
o An application layer is not an application, but it performs the application layer functions.
o This layer provides the network services to the end-users.