
CSE-433 Computer Networks Mid Semester Examination 2020-21
Submitted by: Arya Haldar (17074004), CSE IDD Part IV

1. What is the OSI Model? Explain with its differences from TCP/IP.
The OSI model, also known as the Open Systems Interconnection model, is usually described starting from the bottom and working upward. The first layer, the physical layer, handles the transmission of raw bits over a communication link. The data link layer comes next; it collects a stream of bits into a larger aggregate called a frame. Network adaptors, along with device drivers running in the node's operating system, typically implement the data link level, so it is frames, not raw bits, that are delivered to hosts.
The network layer handles routing among nodes within a packet-switched network; the units of data exchanged here are called packets rather than frames. The lower three layers are implemented on all network nodes, including switches within the network and hosts connected to the network's exterior.
The transport layer then implements what we have, up to this point, been calling a process-to-process channel. The unit of data exchanged here is called a message. The transport layer and higher layers typically run only on the end hosts and not on the intermediate switches or routers.
Looking now from the top down, we find the application layer. Application layer protocols include the Hypertext Transfer Protocol (HTTP), which is the basis of the World Wide Web and enables web browsers to request pages from web servers. Below that, the presentation layer is concerned with the format of data exchanged between peers, for example, whether the most significant byte is transmitted first or last, whether an integer is 16, 32, or 64 bits long, or how a video stream is formatted. Finally, the session layer (the fifth layer) provides a namespace that is used to tie together the potentially different transport streams that are part of a single application. For example, it might manage an audio stream and a video stream that are combined in a teleconferencing application.
The TCP/IP architecture is also known as the Internet architecture. The Internet's application layer is considered to be at layer 7, its transport layer at layer 4, the IP (internetworking, or simply network) layer at layer 3, and the link or subnet layer below IP at layer 2.

The Internet architecture has two main differences from the OSI model:
● The application is free to bypass the defined transport layers and to directly use IP or one of the underlying networks. In fact, programmers are free to define new channel abstractions or applications that run on top of any of the existing protocols (see the socket sketch after this list).
● IP serves as the focal point for the architecture. It defines a common method for exchanging packets among a wide collection of networks. Above IP there can be arbitrarily many transport protocols, each offering a different channel abstraction to application programs. Thus, the issue of delivering messages from host to host is completely separated from the issue of providing a useful process-to-process communication service. Below IP, the architecture allows for arbitrarily many different network technologies, ranging from Ethernet to wireless to single point-to-point links.
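The first point, that an application may bypass the defined transport layers, can be illustrated with a minimal sketch using Python's standard socket module. The raw socket usually requires administrator privileges, so it is wrapped in a try/except; this is only an illustration of the idea, not a complete program.

import socket

# An ordinary application uses a transport protocol: TCP...
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# ...or UDP, a different channel abstraction layered on the same IP.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# An application may also bypass the standard transports and hand packets
# straight to IP through a raw socket (needs admin/root rights on most
# systems, hence the try/except); ICMP utilities such as ping work this way.
try:
    raw_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
except PermissionError:
    raw_sock = None

for s in (tcp_sock, udp_sock, raw_sock):
    if s is not None:
        s.close()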
2.
3.
4.
5.
6. Describe a protocol combining the sliding window algorithm with selective ACKs. Your protocol
should retransmit promptly, but not if a frame simply arrives one or two positions out of order. Your
protocol should also make explicit what happens if several consecutive frames are lost.
If frame[N] arrives, the receiver sends ACK[N] if NFE = N (where NFE is the next frame expected); otherwise, if N was in the receive window, the receiver sends SACK[N].
The sender keeps a bucket of values N > LAR (last ACK received) for which SACK[N] was received; note that whenever LAR slides forward, this bucket has to be purged of all N ≤ LAR.
If the bucket contains one or two values, these could be attributed to out-of-order delivery. However, the sender
might reasonably assume that whenever there was an N>LAR with frame[N] unacknowledged but with three, say,
later SACKs in the bucket, then frame[N] was lost. (The number three here is taken from TCP with fast
retransmit,
which uses duplicate ACKs instead of SACKs.) Retransmission of such frames might then be in order. (TCP’s
fast-retransmit strategy would only retransmit frame[LAR+1].)
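A minimal sender-side sketch of this bookkeeping is shown below. The class and field names (SelectiveAckSender, lar, lfs, DUP_THRESHOLD) are illustrative only; timers and the actual transmission of frames are omitted.

DUP_THRESHOLD = 3  # number of later SACKs that suggests a frame was lost

class SelectiveAckSender:
    def __init__(self, sws):
        self.sws = sws             # send window size
        self.lar = 0               # last (cumulative) ACK received
        self.lfs = 0               # last frame sent
        self.sack_bucket = set()   # N > LAR for which SACK[N] arrived

    def on_send(self, n):
        """Record that frame n was (re)transmitted."""
        self.lfs = max(self.lfs, n)

    def on_ack(self, n):
        """Cumulative ACK: everything up to n has been delivered."""
        if n > self.lar:
            self.lar = n
            # purge SACKs that the cumulative ACK now covers
            self.sack_bucket = {m for m in self.sack_bucket if m > self.lar}

    def on_sack(self, n):
        """Selective ACK for an out-of-order frame inside the window."""
        if n > self.lar:
            self.sack_bucket.add(n)

    def frames_to_retransmit(self):
        """Unacked frames with >= DUP_THRESHOLD later SACKs are presumed lost."""
        lost = []
        for n in range(self.lar + 1, self.lfs + 1):
            if n in self.sack_bucket:
                continue  # already received, just out of order
            later_sacks = sum(1 for m in self.sack_bucket if m > n)
            if later_sacks >= DUP_THRESHOLD:
                lost.append(n)
        return lost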

7. Draw a timeline diagram for the sliding window algorithm SWS=RWS=3 frames, for the following
situations. Use a timeout interval of 2×RTT.
i) Frame 4 is lost
ii) Frames 4 to 6 are lost
In the timeline for part (ii), each of frames 4-6 times out after a 2×RTT timeout interval; a more realistic implementation (e.g., TCP) would probably revert to SWS = 1 after losing packets, to address both congestion control and the lack of ACK clocking.
8. Suppose the following sequence of bits arrives over a link:
011010111110101001111111011001111110 Show the resulting frame after any stuffed bits have been
removed. Indicate any errors that might have been introduced into the frame.

Let ∧ mark the position where a stuffed 0 bit was removed. One error occurs where seven consecutive 1s are detected (err). At the end of the bit sequence, the end-of-frame flag is detected (eof).
01101011111∧10100 1111111err 0110 01111110eof
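The destuffing rule can be written as a short scan over the received bits. The sketch below (the function name and the returned (frame, events) format are invented for illustration) drops a 0 that follows five consecutive 1s, flags seven consecutive 1s as an error, and treats six 1s followed by a 0 as the 01111110 end-of-frame flag.

def destuff(bits):
    """Scan a received bit string ('0'/'1' characters), removing stuffed
    bits and reporting (event, position) pairs."""
    frame, events = [], []
    ones = 0
    for pos, b in enumerate(bits):
        if b == '1':
            frame.append('1')
            ones += 1
            if ones == 7:
                # seven consecutive 1s cannot appear in correctly stuffed data
                events.append(('err', pos))
                ones = 0
        else:  # b == '0'
            if ones == 5:
                # a 0 right after five 1s was stuffed by the sender: drop it
                events.append(('stuffed 0 removed', pos))
            elif ones == 6:
                # six 1s then a 0 is the 01111110 flag: strip the flag bits
                # already appended (simplification: assumes the bit before the
                # six 1s was the flag's leading 0) and stop
                del frame[-7:]
                events.append(('eof', pos))
                break
            else:
                frame.append('0')
            ones = 0
    return ''.join(frame), events

frame, events = destuff("011010111110101001111111011001111110")
print(frame)   # 011010111111010011111110110
print(events)  # stuffed-0 removal, err at the run of seven 1s, then eof

Running this on the sequence from the question reproduces the annotated answer above: one stuffed 0 removed, an error at the run of seven 1s, and the trailing 01111110 recognised as end of frame.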

9. Suppose a 128-kbps point-to-point link is set up between the Earth and a rover on Mars. The distance
from the Earth to Mars (when they are closest together) is approximately 55 Gm, and data travel over the
link at the speed of light 3 x 10^8 m/s.
(a) Calculate the minimum RTT for the link.
(b) Calculate the delay x bandwidth product for the link.
(c) A camera on the rover takes pictures of its surroundings and sends these to Earth. How quickly after a
picture is taken can it reach Mission Control on Earth? Assume that each image is 5 MB in size.
(a) Propagation delay on the link is (55 × 10^9 m) / (3 × 10^8 m/s) ≈ 184 seconds. Thus, the RTT is 368 seconds.

(b) The delay × bandwidth product for the link is 184 s × 128 × 10^3 bits/s ≈ 23.5 × 10^6 bits ≈ 2.81 MB.

(c) After a picture is taken, it must be transmitted on the link and completely propagated before Mission Control can interpret it. The transmit delay for 5 MB of data is 41,943,040 bits / (128 × 10^3 bits/s) ≈ 328 seconds. Thus, the total time required is transmit delay + propagation delay = 328 + 184 = 512 seconds.
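The arithmetic can be checked with a few lines of Python. The answer above rounds the one-way delay up to 184 s, so the printed figures differ slightly in the last digit.

# Quick numerical check of parts (a)-(c); values come from the problem statement.
distance_m = 55e9            # Earth-Mars distance: 55 Gm
c = 3e8                      # propagation speed: speed of light, m/s
bandwidth_bps = 128e3        # link bandwidth: 128 kbps
image_bits = 5 * 2**20 * 8   # one 5 MB picture

prop_delay = distance_m / c                 # one-way propagation delay
rtt = 2 * prop_delay
delay_x_bw_bits = prop_delay * bandwidth_bps
transmit_delay = image_bits / bandwidth_bps

print(f"(a) one-way delay = {prop_delay:.1f} s, RTT = {rtt:.1f} s")
print(f"(b) delay x bandwidth = {delay_x_bw_bits / (8 * 2**20):.2f} MB")
print(f"(c) picture arrives after {transmit_delay + prop_delay:.1f} s")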

10. How does the Spanning Tree Algorithm handle loops?


The main idea of the spanning tree is for the switches to select the ports over which they will forward
frames. Each switch has a unique identifier. The switches have to exchange configuration messages with each
other
and then decide whether or not they are the root or a designated switch based on these messages. The configuration
messages contain three pieces of information:
1. The ID for the switch that is sending the message.
2. The ID for what the sending switch believes to be the root switch.
3. The distance, measured in hops, from the sending switch to the root switch.
Each switch records the current best configuration message it has seen on each of its ports (“best” is
defined below), including both messages it has received from other switches and messages that it has itself
transmitted.
Initially, each switch thinks it is the root, and so it sends a configuration message out on each of its ports
identifying itself as the root and giving a distance to the root of 0. Upon receiving a configuration message over a
particular port, the switch checks to see if that new message is better than the current best configuration message
recorded for that port. The new configuration message is considered better than the currently recorded information if
any of the following is true:
● It identifies a root with a smaller ID.
● It identifies a root with an equal ID but with a shorter distance.
● The root ID and distance are equal, but the sending switch has a smaller ID.
If the new message is better than the currently recorded information, the switch discards the old
information and saves the new information. However, it first adds 1 to the distance-to-root field since the switch is
one hop farther away from the root than the switch that sent the message.
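The "better than" rule above is simply a lexicographic comparison on (root ID, distance, sender ID). A small illustrative sketch follows; the ConfigMessage fields and function names are invented for this example and are not taken from any real STP implementation.

from dataclasses import dataclass

@dataclass
class ConfigMessage:
    root_id: int     # who the sender believes the root is
    distance: int    # sender's distance (in hops) to that root
    sender_id: int   # ID of the switch sending the message

def is_better(new: ConfigMessage, current: ConfigMessage) -> bool:
    """Return True if `new` should replace `current` for a port."""
    return (new.root_id, new.distance, new.sender_id) < \
           (current.root_id, current.distance, current.sender_id)

def record(new: ConfigMessage, current: ConfigMessage) -> ConfigMessage:
    """Keep the better message; when adopting the new one, add one hop,
    since this switch is one hop farther from the root than the sender."""
    if is_better(new, current):
        return ConfigMessage(new.root_id, new.distance + 1, new.sender_id)
    return current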
When a switch receives a configuration message indicating that it is not the root—that is, a message from a
switch with a smaller ID—the switch stops generating configuration messages on its own and instead only forwards
configuration messages from other switches, after first adding 1 to the distance field. Likewise, when a switch
receives a configuration message that indicates it is not the designated switch for that port—that is, a message from
a switch that is closer to the root or equally far from the root but with a smaller ID—the switch stops sending
configuration messages over that port. Thus, when the system stabilizes, only the root switch is still generating
configuration messages, and the other switches are forwarding these messages only over ports for which they are the
designated switch. At this point, a spanning tree has been built, and all the switches are in agreement on which ports
are in use for the spanning tree. Only those ports may be used for forwarding data packets.
Even after the system has stabilized, the root switch continues to send configuration messages
periodically,
and the other switches continue to forward these messages as just described. Should a particular switch fail, the
downstream switches will not receive these configuration messages, and after waiting a specified period of time they
will once again claim to be the root, and the algorithm will kick in again to elect a new root and new designated switches. One important thing to notice is that although the algorithm is able to reconfigure the spanning tree
whenever a switch fails, it is not able to forward frames over alternative paths for the sake of routing around a
congested switch.

11. Explain ARP and DHCP protocols

ARP
The goal of ARP is to enable each host on a network to build up a table of mappings between IP addresses
and link-level addresses. Since these mappings may change over time (e.g., because an Ethernet card in a host
breaks and is replaced by a new one with a new address), the entries are timed out periodically and removed. This
happens on the order of every 15 minutes. The set of mappings currently stored in a host is known as the ARP cache
or ARP table.
ARP takes advantage of the fact that many link-level network technologies, such as Ethernet, support
broadcast. If a host wants to send an IP datagram to a host (or router) that it knows to be on the same network
(i.e.,
the sending and receiving nodes have the same IP network number), it first checks for a mapping in the cache. If
no
mapping is found, it needs to invoke the Address Resolution Protocol over the network. It does this by
broadcasting
an ARP query onto the network. This query contains the IP address in question (the target IP address). Each host
receives the query and checks to see if it matches its IP address. If it does match, the host sends a response message
that contains its link-layer address back to the originator of the query. The originator adds the information contained
in this response to its ARP table.
The query message also includes the IP address and link-layer address of the sending host. Thus, when a
host broadcasts a query message, each host on the network can learn the sender’s link-level and IP addresses and
place that information in its ARP table. However, not every host adds this information to its ARP table. If the host
already has an entry for that host in its table, it “refreshes” this entry; that is, it resets the length of time until it
discards the entry. If that host is the target of the query, then it adds the information about the sender to its table,
even if it did not already have an entry for that host. This is because there is a good chance that the source host is
about to send it an application-level message, and it may eventually have to send a response or ACK back to the
source; it will need the source’s physical address to do this. If a host is not the target and does not already have an
entry for the source in its ARP table, then it does not add an entry for the source. This is because there is no reason
to believe that this host will ever need the source’s link-level address; there is no need to clutter its ARP table with
this information.
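The table-update rules in the last paragraph can be summarised in a few lines. The sketch below is illustrative only: the class and method names and the 15-minute timeout constant are assumptions, and the sending of ARP queries and replies is omitted.

import time

ENTRY_TIMEOUT = 15 * 60   # entries age out on the order of 15 minutes

class ArpTable:
    def __init__(self, my_ip):
        self.my_ip = my_ip
        self.entries = {}     # ip -> (mac, expiry_time)

    def on_query(self, sender_ip, sender_mac, target_ip):
        """Apply the update rules when an ARP query is overheard."""
        now = time.time()
        if sender_ip in self.entries:
            # sender already known: refresh the entry and its timer
            self.entries[sender_ip] = (sender_mac, now + ENTRY_TIMEOUT)
        elif target_ip == self.my_ip:
            # we are the target, so the sender will likely talk to us soon:
            # add the mapping even though we had no entry before
            self.entries[sender_ip] = (sender_mac, now + ENTRY_TIMEOUT)
        # otherwise: not the target and no existing entry -> do not add

    def lookup(self, ip):
        """Return the cached link-level address, or None if missing/expired
        (in which case the caller would broadcast an ARP query)."""
        entry = self.entries.get(ip)
        if entry and entry[1] > time.time():
            return entry[0]
        return None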

DHCP
DHCP relies on the existence of a DHCP server that is responsible for providing configuration information
to hosts. There is at least one DHCP server for an administrative domain. At the simplest level, the DHCP server can
function just as a centralized repository for host configuration information. Consider, for example, the problem of
administering addresses in the internetwork of a large company. DHCP saves the network administrators from
having to walk around to every host in the company with a list of addresses and network map in hand and
configuring each host manually. Instead, the configuration information for each host could be stored in the
DHCP
server and automatically retrieved by each host when it is booted or connected to the network. However, the
administrator would still pick the address that each host is to receive; he would just store that in the server. In this
model, the configuration information for each host is stored in a table that is indexed by some form of a unique
client identifier, typically the hardware address (e.g., the Ethernet address of its network adaptor).
More sophisticated use of DHCP saves the network administrator from even having to assign addresses
to
individual hosts. In this model, the DHCP server maintains a pool of available addresses that it hands out to hosts
on
demand. This considerably reduces the amount of configuration an administrator must do, since now it is only
necessary to allocate a range of IP addresses (all with the same network number) to each network.
Since the goal of DHCP is to minimize the amount of manual configuration required for a host to function,
it would rather defeat the purpose if each host had to be configured with the address of a DHCP server. Thus, the
first problem faced by DHCP is that of server discovery. To contact a DHCP server, a newly booted or attached
host
sends a DHCPDISCOVER message to a special IP address (255.255.255.255) that is an IP broadcast address. This
means it will be received by all hosts and routers on that network. (Routers do not forward such packets onto
other
networks, preventing broadcast to the entire Internet.) In the simplest case, one of these nodes is the DHCP server
for the network. The server would then reply to the host that generated the discovery message (all the other nodes
would ignore it). However, it is not really desirable to require one DHCP server on every network, because this
still
creates a potentially large number of servers that need to be correctly and consistently configured. Thus, DHCP uses
the concept of a relay agent. There is at least one relay agent on each network, and it is configured with just one
piece of information: the IP address of the DHCP server. When a relay agent receives a DHCPDISCOVER message,
it unicasts it to the DHCP server and awaits the response, which it will then send back to the requesting client.
In the case where DHCP dynamically assigns IP addresses to hosts, it is clear that hosts cannot keep
addresses indefinitely, as this would eventually cause the server to exhaust its address pool. At the same time, a
host
cannot be depended upon to give back its address, since it might have crashed, been unplugged from the network, or
been turned off. Thus, DHCP allows addresses to be leased for some period of time. Once the lease expires, the
server is free to return that address to its pool. A host with a leased address clearly needs to renew the lease
periodically if in fact it is still connected to the network and functioning correctly.
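A rough sketch of the lease-based assignment just described is given below. Message handling (DHCPDISCOVER and the replies), relay agents, and the real DHCP packet formats are omitted; the names and the one-hour lease length are assumptions made for illustration.

import time

LEASE_TIME = 3600   # seconds; an assumed lease length

class DhcpPool:
    def __init__(self, addresses):
        self.free = list(addresses)   # unassigned addresses
        self.leases = {}              # client_id -> (address, expiry)

    def expire(self):
        """Return addresses whose leases have run out to the pool."""
        now = time.time()
        for client, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[client]
                self.free.append(addr)

    def discover(self, client_id):
        """Hand out (or renew) a leased address for a client, keyed for
        example by its hardware address."""
        self.expire()
        if client_id in self.leases:          # renewal of an existing lease
            addr, _ = self.leases[client_id]
        elif self.free:
            addr = self.free.pop()            # allocate from the pool
        else:
            return None                       # pool exhausted
        self.leases[client_id] = (addr, time.time() + LEASE_TIME)
        return addr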

12. Explain the working of GMAIL w.r.t. the OSI model (working of each layer).
At the very top of the OSI model is the application layer. The email client GUI would be a part of the
application layer. Below that, the presentation layer is concerned with the format of data exchanged between peers,
for example, converting the data of the email into a generic format. The session layer provides a namespace that is
used to tie together the potentially different transport streams that are part of a single application. For example, it
might keep the user logged in and verify the authenticity of the email's sender. In the transport layer, a suitable protocol (here TCP) is chosen for transmission and the message is passed on to the next layer. The network layer handles routing among nodes within a packet-switched network; it adds the sender's and receiver's IP addresses and passes the packet on. The data link layer breaks the packets into frames to be handed to the physical layer. The
physical layer transmits the raw bits over the communication link. The transport layer and higher layers run only on
the end hosts and not on the intermediate switches or routers. The lower layers are run on each intermediate
switch/router as well.
