
OSI model (Open Systems Interconnection)

Posted by: Margaret Rouse, WhatIs.com
OSI (Open Systems Interconnection) is a reference model for how
applications communicate over a network.

A reference model is a conceptual framework for understanding relationships.


The purpose of the OSI reference model is to guide vendors and developers
so the digital communication products and software programs they create
can interoperate, and to facilitate a clear framework that describes the
functions of a networking or telecommunication system.

Most vendors involved in telecommunications attempt to describe
their products and services in relation to the OSI model. And although it is
useful for guiding discussion and evaluation, OSI is rarely
implemented as-is. That's because few network products or standard tools
keep related functions together in well-defined layers, as the
OSI model does. The TCP/IP protocol suite, which defines the internet, does not
map cleanly to the OSI model.

How the OSI model works


IT professionals use OSI to model or trace how data is sent or received over a
network. This model breaks down data transmission over a series of seven
layers, each of which is responsible for performing specific tasks concerning
sending and receiving data.

The main concept of OSI is that the process of communication between two
endpoints in a network can be divided into seven distinct groups of related
functions, or layers. Each communicating user or program is on a device that
can provide those seven layers of function.

In this architecture, each layer serves the layer above it and, in turn, is served
by the layer below it. So, in a given message between users, there will be a
flow of data down through the layers in the source computer, across the
network, and then up through the layers in the receiving computer. Only the
application layer, at the top of the stack, doesn’t provide services to a higher-
level layer.

The seven layers of function are provided by a combination of
applications, operating systems, network card device drivers and networking
hardware that enable a system to transmit a signal over network media such as
Ethernet or fiber-optic cable, or through Wi-Fi or other wireless protocols.
7 layers of the OSI model

The seven Open Systems Interconnection layers are:


Layer 7: The application layer: Enables the user (human or software) to
interact with the application or network whenever the user elects to read
messages, transfer files or perform other network-related activities. Web
browsers and other internet-connected apps, such as Outlook and Skype, use
Layer 7 application protocols.

Layer 6: The presentation layer: Translates or formats data for the application
layer based on the semantics or syntax that the application accepts. This layer
is also able to handle the encryption and decryption that the application layer
requires.

Layer 5: The session layer: Sets up, coordinates and terminates
conversations between applications. Its services include authentication and
reconnection after an interruption. This layer determines how long a system
will wait for another application to respond. Examples of session layer
protocols include X.225, AppleTalk and Zone Information Protocol (ZIP).

Layer 4: The transport layer: Transfers data across a
network and provides error-checking mechanisms and data flow controls. It
determines how much data to send, where it gets sent and at what rate. The
Transmission Control Protocol (TCP) is the best-known example of the transport
layer.

Layer 3: The network layer: Primary function is to move data into and through
other networks. Network layer protocols accomplish this by packaging data
with correct network address information, selecting the appropriate network
routes and forwarding the packaged data up the stack to the transport layer.

Layer 2: The data-link layer: Handles the moving of data into and out of a
physical link in a network. This layer handles problems that occur as a
result of bit transmission errors. It ensures that the pace of the data flow
doesn't overwhelm the sending and receiving devices. This layer also passes
data up to Layer 3, the network layer, where it is addressed and routed.

Layer 1: The physical layer: Transports data using electrical, mechanical or
procedural interfaces. This layer is responsible for sending computer bits from
one device to another along the network. It determines how physical
connections to the network are set up and how bits are represented as
predictable signals as they are transmitted electrically, optically or via
radio waves.
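The top-down/bottom-up flow described earlier can be sketched in code. The snippet below is a simplified illustration, not a real protocol implementation: on the sending side each layer wraps the payload from the layer above with its own header, and the receiving side strips those headers in reverse order.

```python
# Simplified sketch of OSI-style encapsulation: each layer adds a
# header going down the stack, and removes it going back up.
LAYERS = ["application", "presentation", "session", "transport",
          "network", "data-link", "physical"]

def send(payload: str) -> str:
    """Wrap the payload with one illustrative header per layer (top-down)."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def receive(frame: str) -> str:
    """Strip the headers in reverse order (bottom-up)."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = send("hello")
print(frame)            # [physical][data-link]...[application]hello
print(receive(frame))   # hello
```

Note how the physical-layer header ends up outermost: it is the last wrap applied on the way down and the first removed on the way up.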

Cross-layer functions

Cross-layer functions, services that may affect more than one layer, include:

 Security service (telecommunication) as defined by the ITU-T X.800 recommendation.

 Management functions -- functions that enable the configuration, instantiation, monitoring and terminating of the communications of two or more entities.

 Multiprotocol Label Switching (MPLS) -- operates at an OSI-model layer that lies between layer 2 (data link layer) and layer 3 (network layer). MPLS can be used to carry a variety of traffic, including Ethernet frames and IP packets.

 ARP -- translates IPv4 addresses (OSI layer 3) into Ethernet MAC addresses (OSI layer 2).

 Domain name service -- an application layer service used to look up the IP address of a domain name.
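A domain name lookup like the one just described can be made from application code with Python's standard socket library. The sketch below uses "localhost" so that it resolves locally (via the hosts file) without needing an external DNS server:

```python
import socket

# Resolve a hostname to an IPv4 address via the system resolver.
# "localhost" is chosen because it resolves without external DNS.
address = socket.gethostbyname("localhost")
print(address)  # typically 127.0.0.1
```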
History of the OSI model

Developed by representatives of major computer and telecommunication
companies beginning in 1983, OSI was originally intended to be a detailed
specification of actual interfaces. Instead, the committee decided to establish
a common reference model that others could then use to develop detailed
interfaces, which, in turn, could become standards governing the transmission
of data packets. The OSI architecture was officially adopted as an
international standard by the International Organization for Standardization
(ISO) in 1984.

OSI model vs. TCP/IP model

OSI is a reference model that describes the functions of a telecommunication
or networking system, while TCP/IP is a suite of communication protocols
used to interconnect network devices on the internet. TCP/IP and OSI are the
most broadly used networking models for communication.

The OSI and TCP/IP models have similarities and differences. The main
similarity is in their construction as both use layers, although the OSI model
consists of seven layers, while TCP/IP consists of just four layers.

Another similarity is that the upper layer for each model is the application
layer, which performs the same tasks in each model, but may vary according
to the information each receives.

The functions performed in each model are also similar because each uses a
network layer and a transport layer to operate. The OSI and TCP/IP models are
each mostly used to transmit data packets. Although they do so by different
means and along different paths, the packets still reach their destinations.

The OSI and TCP/IP models are similar in that they:

 are logical models.

 define standards for networking.

 divide the network communication process into layers.

 provide frameworks for creating and implementing networking standards and devices.

 enable one manufacturer to make devices and network components that can coexist and work with the devices and components made by other manufacturers.

 divide complex functions into simpler components.

Differences between the OSI model and TCP/IP model include:

 OSI has seven layers, while TCP/IP has four.

 OSI uses three layers (application, presentation and session) to define the functionality of the upper layers, while TCP/IP uses just one layer (application).

 OSI uses two separate layers (physical and data link) to define the functionality of the bottom layers, while TCP/IP uses one layer (link).

 OSI uses the network layer to define the routing standards and protocols, while TCP/IP uses the internet layer.
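The layer correspondence just described can be written out as a small mapping. This is one common interpretation; exact mappings vary between textbooks:

```python
# One common mapping of the seven OSI layers onto the four TCP/IP layers,
# following the groupings described in the text above.
OSI_TO_TCPIP = {
    "application":  "application",
    "presentation": "application",
    "session":      "application",
    "transport":    "transport",
    "network":      "internet",
    "data link":    "link",
    "physical":     "link",
}

# Seven OSI layers collapse into four TCP/IP layers:
tcpip_layers = sorted(set(OSI_TO_TCPIP.values()))
print(tcpip_layers)  # ['application', 'internet', 'link', 'transport']
```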
Pros and cons of the OSI model

The OSI model has a number of advantages, including:

 It’s considered a standard model in computer networking.

 Supports connectionless as well as connection-oriented services. Users can leverage connectionless services when they need faster data transmissions over the internet and the connection-oriented model when they're looking for reliability.

 Has the flexibility to adapt to many protocols.

 More adaptable and secure than having all services bundled in one layer.
The disadvantages include:

 Doesn’t define any particular protocol.

 The session layer, which is used for session management, and the presentation layer, which deals with data translation, aren't as useful as other layers in the OSI model.

 Some services are duplicated at various layers; for example, the transport and data link layers each have an error control mechanism.

 Layers can’t work in parallel; each layer has to wait to receive data from
the previous layer.

The TCP/IP Reference Model


TCP/IP stands for Transmission Control Protocol and Internet Protocol. It is the network
model used in the current internet architecture as well. Protocols are sets of rules which
govern every possible communication over a network. These protocols describe the
movement of data between the source and destination over the internet. They also offer
simple naming and addressing schemes.

Protocols and networks in the TCP/IP model:

Overview of TCP/IP reference model
TCP/IP, that is, Transmission Control Protocol and Internet Protocol, was developed by
the U.S. Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA) as part of
a research project on network interconnection, to connect remote machines.
The features that stood out during the research, which led to making the TCP/IP
reference model, were:

 Support for a flexible architecture. Adding more machines to a network was easy.
 The network was robust, and connections remained intact as long as the source and
destination machines were functioning.

The overall idea was to allow an application on one computer to talk to (send data
packets to) another application running on a different computer.

Different Layers of TCP/IP Reference


Model
Below we have discussed the 4 layers that form the TCP/IP reference model:

Layer 1: Host-to-network Layer

1. The lowest layer of all.
2. A protocol is used to connect to the host, so that the packets can be sent over it.
3. Varies from host to host and network to network.

Layer 2: Internet layer

1. The selection of a packet-switching network which is based on a connectionless
internetwork layer is called an internet layer.
2. It is the layer which holds the whole architecture together.
3. It helps the packet travel independently to the destination.
4. The order in which packets are received may differ from the order in which they are sent.
5. IP (Internet Protocol) is used in this layer.
6. The various functions performed by the Internet Layer are:
o Delivering IP packets
o Performing routing
o Avoiding congestion

Layer 3: Transport Layer

1. It decides if data transmission should be on a parallel path or a single path.
2. Functions such as multiplexing, segmenting or splitting of the data are done by the
transport layer.
3. Applications can read from and write to the transport layer.
4. The transport layer adds header information to the data.
5. The transport layer breaks the message (data) into small units so that they are
handled more efficiently by the network layer.
6. The transport layer also arranges the packets to be sent in sequence.
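The segmenting and reassembly described in the list above can be sketched as follows (the fragment size is an arbitrary illustration, not a real MTU):

```python
def segment(message: bytes, size: int) -> list[bytes]:
    """Break a message into fixed-size segments (illustrative size)."""
    return [message[i:i + size] for i in range(0, len(message), size)]

def reassemble(segments: list[bytes]) -> bytes:
    """Rejoin the segments in sequence."""
    return b"".join(segments)

data = b"hello, transport layer"
parts = segment(data, 5)
print(parts[0])                   # b'hello'
assert reassemble(parts) == data  # reassembly restores the message
```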

Layer 4: Application Layer

The TCP/IP specifications described a lot of applications that sit at the top of the
protocol stack, such as TELNET, FTP, SMTP and DNS.

1. TELNET is a two-way communication protocol which allows connecting to a
remote machine and running applications on it.
2. FTP (File Transfer Protocol) is a protocol that allows file transfer amongst
computer users connected over a network. It is reliable, simple and efficient.
3. SMTP (Simple Mail Transfer Protocol) is a protocol which is used to transport
electronic mail between a source and destination, directed via a route.
4. DNS (Domain Name System) resolves the textual name of a host connected over a
network into its IP address.
5. The application layer allows peer entities to carry on a conversation.
6. Beneath it, the transport layer defines two end-to-end protocols: TCP and UDP.
o TCP (Transmission Control Protocol): a reliable connection-oriented
protocol which handles a byte stream from source to destination with
error and flow control.
o UDP (User Datagram Protocol): an unreliable connectionless
protocol that does not provide TCP's sequencing and flow control. E.g., one-
shot request-reply kinds of service.
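TCP's connection-oriented byte-stream behaviour can be seen with a minimal loopback echo using the standard socket module. This is a sketch, not production code; binding to port 0 lets the OS pick a free port:

```python
import socket
import threading

# Minimal TCP echo over the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: the OS picks a free port
server.listen(1)
host, port = server.getsockname()

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the received bytes back

t = threading.Thread(target=echo_once)
t.start()

with socket.create_connection((host, port)) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'ping'
```

A UDP version would use SOCK_DGRAM and sendto/recvfrom instead, with no connection setup and no delivery guarantee.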
Merits of the TCP/IP model
1. It operates independently.
2. It is scalable.
3. It supports a client/server architecture.
4. It supports a number of routing protocols.
5. It can be used to establish a connection between two computers.

Demerits of TCP/IP
1. The transport layer does not guarantee delivery of packets (when UDP is used).
2. The model is tied to its own protocol suite and cannot readily describe other stacks.
3. Replacing a protocol is not easy.
4. It has not clearly separated its services, interfaces and protocols.

What are Proxy Servers and


how do they protect computer
networks?
The IP address which is given to your system by the ISP (Internet
Service Provider) is used to uniquely identify your system. But there
are many risks involved with exposing an IP address: others might find
personal information about you, spam you with "personalized ads",
etc. So, proxy servers or VPNs can be used to overcome this
problem. In this blog, we will learn how a proxy server protects
your computer network. So, let's get started.

Proxy Server
The word proxy literally means a substitute. A proxy server
substitutes the IP address of your computer with some substitute IP
address. If you can't access a website from your computer or you
want to access that website anonymously because you want your
identity to be hidden or you don't trust that website then you can use
a proxy. These proxy servers are dedicated computer systems, or
software running on a computer system, that act as
an intermediary separating the end-users from the server. Proxy
servers are especially popular in countries like China
where the government has banned connections to some specific
websites.

How does a proxy server work?


Every computer on the network has a unique IP address. This IP
address is analogous to your street address which must be known by
the post office in order to deliver your parcel to your home. A proxy
server is a computer on the internet with its own IP address and the
client which is going to use this proxy server knows this IP address.
Whenever the client makes any request to any web server then its
request first goes to this proxy server. This proxy server then makes a
request to the destination server on behalf of the client. The proxy
server actually changes the IP address of the client so that the
actual IP address of the client is not revealed to the webserver. The
proxy server then collects the response from the webserver and
forwards the result to the client and the client can see the result in its
web browser.
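In code, routing requests through a proxy usually means pointing the HTTP client at the proxy's address. A minimal sketch with Python's standard urllib; the proxy address below is a made-up placeholder (TEST-NET range), not a real proxy:

```python
import urllib.request

# Hypothetical proxy address -- replace with a real proxy host:port
# to actually route traffic through it.
proxy = urllib.request.ProxyHandler({
    "http":  "http://192.0.2.10:8080",
    "https": "http://192.0.2.10:8080",
})

# Requests made through this opener go to the proxy first; the
# destination server then sees the proxy's IP rather than the client's.
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)  # make it the process-wide default
```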
Types of Proxy servers
1. Anonymous Proxy: An anonymous proxy is the most familiar
type of proxy server. It hides the original IP address of the
client and passes an anonymous IP address to the web server
while making the request. By doing this, there is no way that the
end-user receiving the request can find out the location from
where the request was made. That's why most people use a
proxy. This helps in preventing identity theft and keeps your
browsing habits private.
2. High Anonymity Proxy: These types of servers change the IP
address periodically. This makes it very difficult for the
webserver to keep track of which IP address belongs to
whom. The TOR network is an example of a high anonymity proxy
server. A high anonymity server has an advantage over the
anonymous proxy in terms of privacy and security.
3. Transparent Proxy Server: As the name suggests this proxy
server will pass your IP address to the webserver. This is not
used for the same purpose as the above two rather it is used for
resource sharing. They are mainly used in public libraries and
schools for content filtering. Example: If the students of a
school are viewing the same article again and again via their
school network, then it would be more efficient to cache the
content and serve the next request from the cache. A
transparent proxy can do this for the organizations.
4. Reverse Proxy: Here the goal of the proxy server is not to
protect you while accessing webpages but to stop
others on the internet from freely accessing your network. The
most basic application of a reverse proxy is that it protects the
company resources and data on individual computers by
stopping third-party access to these computers.
Advantages of Using Proxy Servers
1. Privacy: Individuals and organizations use the proxy server so
that they can browse the internet more privately. The use of
proxy protects them from identity theft and keeps
their browsing habits private. Many ISPs also collect your
browsing history data and sell it to retailers and
governments.
2. Access to Blocked Resources: Several governments around
the world restrict access to citizens to many websites and proxy
servers provide access to the uncensored internet.
Example: The Chinese government has blocked access to
many websites for its citizens but a proxy server is all they need.
3. Speed up Internet Surfing: Proxy Server also caches the
data. So, if you ask for the website afteracademy.com then your
request will first reach the proxy server. Proxy Server checks if it
has cached this website. If it has cached it then you will get
feedback from the proxy server which will be faster than directly
accessing the website.
4. To Control Internet Usage of Employees and
Children: Organisations can use proxy servers to stop
employees from accessing certain websites (like Facebook) in
office hours. Parents can also use a proxy server to monitor how
their children use the internet.

Risk of Using a Proxy Server


1. The most common risk is spyware that gets downloaded with free
software. This is where the quality of your proxy server becomes
more important. The proxy provider should have advanced
security protocols so that the spyware is left passive and can't do
any harm to your system even if installed. Proxy
servers which don't have this kind of security allow the spyware
to send out your computer information and other personal data,
leaving the proxy servers useless.
2. The proxy server has your IP address and web requests saved,
mostly unencrypted. You must know whether they save and log your
data, and what policies they follow. It is
possible that they might sell your data to vendors.
3. This risk is out of your control: hackers can take control of
the proxy server and monitor and change the data that passes
over the proxy server.
ipconfig and ifconfig

ipconfig (internet protocol configuration) in Microsoft
Windows is a console application that displays all current TCP/IP
network configuration values and can modify Dynamic Host
Configuration Protocol (DHCP) and Domain Name System (DNS)
settings.

ifconfig (short for interface configuration) is a system
administration utility in Unix-like operating systems to configure,
control, and query TCP/IP network interface parameters from a
command line interface (CLI) or in system configuration scripts.

Ipconfig (sometimes written as IPCONFIG) is a command line tool used to control
the network connections on Windows NT/2000/XP machines. There are three main
commands: "all", "release", and "renew". Ipconfig displays all
current TCP/IP network configuration values and refreshes Dynamic Host
Configuration Protocol (DHCP) and Domain Name System (DNS) settings. Used
without parameters, ipconfig displays the IP address, subnet mask, and default
gateway for all adapters.

Factors That Affect The Performance Of Networks


We talked about what a network is in the previous section, but it is also important to understand
how the performance of a network is affected.
How do we calculate the speed of data in a network?
Bit rate x bandwidth: the bit rate (the speed of one bit) multiplied by the volume of data that can
be sent at once gives the total speed.

When we talk about the performance of a network, we mean how fast data is able to transfer
from one device in the network to another. The time taken for the data to be requested and then
sent is known as latency.

Often we refer to the delay in receiving data as lag.

 
There are a number of factors that could affect the speed of data transfer in the network:

Bandwidth
Imagine the cables in the network are a little like a river. If there are two rivers where the water
is flowing at the same speed, but one is wider, more data can flow down the second river even
though the water is travelling at the same speed.

In the same way, the bandwidth is the volume of data that can travel along media at the same
time.

Bit Rate
Where bandwidth measures the volume of data that can travel at once, bit rate measures how fast
the data can travel. This is calculated by measuring bits per second.

Number of Users
Have you ever noticed that your internet slows down at the weekend, or when something
exciting is happening on TV? The reason for this is likely to be something to do with how many
people are using the network.

Devices inside a network share the available bandwidth. If there are 4 devices on a
network, then they will receive 1/4 of the bandwidth each*. As more devices join the network,
the bandwidth is divided into smaller and smaller amounts for each device, eventually making
the network noticeably slower.

 
*This is actually a little bit simplistic, as the router is intelligent enough to divide the bandwidth
depending on what each device is doing.

Type of Network Media


Media in this case refers to the type of cable or wireless connection. As a rule, wired connections
tend to be faster but less portable. Copper cable is faster than WiFi, and fibre optic cable is
much faster than ethernet (a common type of copper cable).

Type of Error Checking


To ensure that data is not corrupted, there are a number of methods that can be used to check that
the data is correct. Error checking methods such as Echo back (where the receiver sends a copy
of the data back) and majority voting (where the data is sent three times) cause additional data to
be sent through the network and can cause congestion.

Poor Hardware Planning


When a network is set up, bottlenecks can be created in the network by attempting to route too
much network traffic through a single point. This can often be remedied by adding additional
routers or switches.

Factors that affect the


performance of networks
Network performance is about response time - how fast a message can be sent or how quickly a
document can be retrieved. The performance of a network can be affected by various factors:

 the number of devices on the network


 the bandwidth of the transmission medium
 the type of network traffic
 network latency
 the number of transmission errors
Any network can be affected by one or a combination of these factors.

Bandwidth is a measure of the amount of data that the medium can transfer over a given period
of time. Each transmission medium has a different bandwidth:
Medium Typical bandwidth

Twisted copper wire Up to 1 gigabit (Gb) per second

Fibre-optic cable Over 40 terabits (Tb) per second

Wi-Fi (home networks) 54 megabits (Mb) per second

Business Wi-Fi Up to 1 gigabit per second

Each connected device requires bandwidth to be able to communicate. The bandwidth of the
medium is shared between each connected device. For example, a home Wi-Fi network with one
device would allocate 54 Mb per second to that device. If a second device joins the network, the
bandwidth would be split between the two, giving 27 Mb per second to each, and so on. If ten
devices were connected, the bandwidth allocated to each device would drop to 5.4 Mb per
second, thereby reducing the rate at which data can be sent to any particular device.
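The sharing arithmetic above is straightforward to check:

```python
# Per-device bandwidth when a shared medium is split evenly,
# using the 54 Mb per second home Wi-Fi figure from the text.
WIFI_BANDWIDTH_MBPS = 54

def share(devices: int) -> float:
    """Even split of the medium's bandwidth across connected devices."""
    return WIFI_BANDWIDTH_MBPS / devices

print(share(1))   # 54.0
print(share(2))   # 27.0
print(share(10))  # 5.4
```

As the text notes, real networks are more complicated: switches and routers can allocate bandwidth unevenly according to each device's traffic.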

In reality, however, things are more complicated. Different types of network traffic usually have
different bandwidth requirements. For example, streaming a high definition video requires more
bandwidth than streaming a low definition video. Some network switches are capable of
determining the type of traffic and adjusting the bandwidth allocated to a particular device to
accommodate the traffic's requirements.

Latency
Network latency is a measure of how long it takes a message to travel from one device to another
across a network. A network with low latency experiences few delays in transmission, whereas a
high latency network experiences many delays. The more delays there are, the longer it takes to
transmit data across a network.

Latency is affected by the number of devices on the network and the type of connection device.

A hub-based network will usually experience higher latency than a switch-based network
because hubs broadcast all messages to all devices. Switch-based networks transmit messages
only to the intended recipient.

The greater the number of devices connected to a network, the more important the choice of
transmission medium becomes. Wi-Fi generally handles less traffic than twisted copper wire
(TCW), which in turn handles less traffic than fibre-optic cable. Many networks include a
combination of all three media:

 fibre-optic cables allow high data transmission between different buildings


 TCW runs from switches within buildings to individual devices
 Wi-Fi allows guest devices to connect to the network

Transmission errors
Inevitably there will be times when devices try to communicate with each other at the same time.
Their signals collide with each other and the transmission fails. It is similar to when two people
speak to each other simultaneously - neither person is able to clearly hear what the other person
is saying.

A collision occurs when two devices on a network try to communicate simultaneously along
the same communication channel.
The greater the number of devices on a network, the more chance of a collision occurring, and
the longer it takes to transmit data across the network.

DEVICE DRIVER 
A device driver is a particular form of software application that is designed to enable
interaction with hardware devices. Without the required device driver, the corresponding
hardware device fails to work.
A device driver usually communicates with the hardware by means of the
communications subsystem or computer bus to which the hardware is connected.
Device drivers are operating system-specific and hardware-dependent. A device driver
acts as a translator between the hardware device and the programs or operating
systems that use it.
A device driver may also be called a software driver.

INTERRUPTS
In system programming, an interrupt is a signal to the processor emitted by hardware or
software indicating an event that needs immediate attention. An interrupt alerts the processor
to a high-priority condition requiring the interruption of the current code the processor is
executing. The processor responds by suspending its current activities, saving its state, and
executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with
the event. This interruption is temporary, and, after the interrupt handler finishes, the processor
resumes normal activities.
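The cycle described above (an event arrives, the current activity is suspended, a handler runs, normal execution resumes) can be mimicked in user space with Unix signals. This is only an analogy to hardware interrupts, not the real mechanism, and it assumes a Unix-like system:

```python
import os
import signal

events = []

def handler(signum, frame):
    # The "interrupt service routine": runs when the signal arrives,
    # then control returns to the interrupted code.
    events.append(("handled", int(signum)))

signal.signal(signal.SIGUSR1, handler)  # register the handler
os.kill(os.getpid(), signal.SIGUSR1)    # deliver the "interrupt" to ourselves

# Normal execution resumes here after the handler returns.
events.append("resumed")
print(events)
```

The handler runs before the line after os.kill completes its effect, mirroring how a processor services an interrupt before continuing the interrupted code.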

PROCESS STATE
A process passes through various states as it executes; the state of a process is
also called its status. The status indicates whether the process is executing,
whether it is waiting for some input and output from the user, or whether it is
waiting for the CPU to run the program.
The various states of a process are as follows:
1) New state: When a user requests a service from the system, the system first
initializes the process, calling it an initial process. Every new operation
requested from the system is known as a newborn process.
2) Running state: When the process is running under the CPU, that is, when the
program is being executed by the CPU, it is called a running process; a running
process may also produce some output on the screen.
3) Waiting: When a process is waiting for some input and output operations, it is
in the waiting state. In this state the process is not under execution; instead it
is stored out of memory, and when the user provides the input it moves back to the
ready state.
4) Ready state: When the process is ready to execute but is waiting for the CPU,
it is in the ready state. After the completion of its input and output, the
process returns to the ready state and waits for the processor to execute it.
5) Terminated state: After the completion of the process, it is terminated by the
CPU; this is called the terminated state of the process. After executing the whole
process, the processor also deallocates the memory that was allocated to the
process.
Although it appears that many processes are running at a time, this is not true: a
processor can execute only one process at a time. The states of the processes
determine which process will be executed: processes in the waiting state will not
be executed, and the CPU divides its time among the processes that are ready to
execute.
When a process changes from one state to another, this is called a process state
transition. For example, a running process may go to the waiting state, a process
in the waiting state may return to the ready state, and a ready process may move
on to the running state.
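The transitions described above can be modelled as a small state machine. This is a simplified sketch of the classic five-state diagram, not how any real scheduler is implemented:

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Allowed process state transitions (simplified five-state model).
TRANSITIONS = {
    State.NEW:     {State.READY},
    State.READY:   {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
}

def step(current: State, nxt: State) -> State:
    """Move to the next state, rejecting illegal transitions."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {nxt}")
    return nxt

# A typical lifetime: new -> ready -> running -> waiting -> ready
#                     -> running -> terminated
s = State.NEW
for nxt in [State.READY, State.RUNNING, State.WAITING,
            State.READY, State.RUNNING, State.TERMINATED]:
    s = step(s, nxt)
print(s)  # State.TERMINATED
```

Note that a waiting process cannot jump straight to running; it must pass through the ready state first, exactly as the prose describes.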
 
