
CHANDIGARH UNIVERSITY

Assignment No:- 3
Subject :- Computer Network
Subject Code:- CAT-312
Date of submission:- 12-11-19

Name:-Sangeeta Faculty name:-Miss. Priyanka


Class:-BCA-5(B) UID:-17BCA1395
Q1:- How does multimedia help in the application layer?
Ans:- Multimedia networking applications (some authors call these continuous-media applications) are typically highly sensitive to delay; depending on the particular application, packets that incur more than x seconds of delay, where x can range from 100 msec to five seconds, are useless. On the other hand, multimedia networking applications are typically loss tolerant; occasional loss only causes occasional glitches in the audio/video playback, and often these losses can be partially or fully concealed. Thus, in terms of service requirements, multimedia applications are the diametric opposite of static-content applications: multimedia applications are delay sensitive and loss tolerant, whereas static-content applications are delay tolerant and loss intolerant.

The Internet carries a large variety of exciting multimedia applications. Below we define three classes of multimedia applications.

Streaming stored audio and video: In this class of applications, clients request
on-demand compressed audio or video files, which are stored on servers. For
audio, these files can contain a professor's lectures, rock songs, symphonies,
archives of famous radio broadcasts, as well as historical archival recordings.
For video, these files can contain video of professors' lectures, full-length
movies, prerecorded television shows, documentaries, video archives of
historical events, video recordings of sporting events, cartoons and music video
clips.

One-to-many streaming of real-time audio and video: This class of applications is similar to the ordinary broadcast of radio and television, except that the transmission takes place over the Internet. These applications allow a user to receive a radio or television transmission emitted from any corner of the world. (For example, one of the authors of this book often listens to his favorite Philadelphia radio stations from his home in France.)

Real-time interactive audio and video: This class of applications allows people
to use audio/video to communicate with each other in real-time. Real-time
interactive audio is often referred to as Internet phone, since, from the user's
perspective, it is similar to traditional circuit-switched telephone service.
Internet phone can potentially provide PBX, local and long-distance telephone
service at very low cost. It can also facilitate computer-telephone integration (so
called CTI), group real-time communication, directory services, caller
identification, caller filtering, etc.

Q2:- Elaborate various data link layer protocols in detail.

Ans:- The data link layer is concerned with local delivery of frames between
nodes on the same level of the network. Data-link frames, as these protocol data
units are called, do not cross the boundaries of a local area network. Inter-
network routing and global addressing are higher-layer functions, allowing data-
link protocols to focus on local delivery, addressing, and media arbitration. In
this way, the data link layer is analogous to a neighborhood traffic cop; it
endeavors to arbitrate between parties contending for access to a medium,
without concern for their ultimate destination. When devices attempt to use a
medium simultaneously, frame collisions occur. Data-link protocols specify
how devices detect and recover from such collisions, and may provide
mechanisms to reduce or prevent them.

Examples of data link protocols are Ethernet for local area networks (multi-
node), the Point-to-Point Protocol (PPP), HDLC and ADCCP for point-to-point
(dual-node) connections. In the Internet Protocol Suite (TCP/IP), the data link
layer functionality is contained within the link layer, the lowest layer of the
descriptive model, which also includes the functionality encompassed in the
OSI model's physical layer.

The data link layer has two sublayers: logical link control (LLC) and media
access control (MAC).[2]

Logical link control sublayer

The uppermost sublayer, LLC, multiplexes protocols running at the top of data
link layer, and optionally provides flow control, acknowledgment, and error
notification. The LLC provides addressing and control of the data link. It
specifies which mechanisms are to be used for addressing stations over the
transmission medium and for controlling the data exchanged between the
originator and recipient machines.

Media access control sublayer

MAC may refer to the sublayer that determines who is allowed to access the media at any one time (e.g. CSMA/CD), or to the frame structure delivered based on the MAC addresses inside.

There are generally two forms of media access control: distributed and centralized.[3] Both may be compared to communication between people. In a conversation among several people, speakers who talk over each other will each pause a random amount of time and then attempt to speak again, effectively establishing a long and elaborate game of saying "no, you first".
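The "no, you first" behaviour above is essentially random backoff, the heart of distributed media access. The following toy simulation (Python is used here purely for illustration; the node count, slot range and seed are assumed values, not part of the original text) shows contending nodes each picking a random slot and retrying after a collision:

```python
import random

def contend(num_nodes, max_backoff=16, seed=0):
    """Toy model of distributed media access: every waiting node picks a
    random backoff slot; a node wins the medium only if it alone chose the
    earliest slot, otherwise that slot collides and everyone retries."""
    rng = random.Random(seed)
    waiting = list(range(num_nodes))
    order = []                      # order in which nodes win the medium
    while waiting:
        slots = {n: rng.randrange(max_backoff) for n in waiting}
        earliest = min(slots.values())
        contenders = [n for n, s in slots.items() if s == earliest]
        if len(contenders) == 1:    # exactly one node in the earliest slot
            order.append(contenders[0])
            waiting.remove(contenders[0])
        # else: collision in that slot -> all nodes back off and retry
    return order

print(contend(5, seed=1))   # every node eventually gets its turn
```

Because the backoff slots are random, collisions become progressively less likely and every node eventually transmits, which is exactly the conversational "pause and try again" behaviour.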

The Media Access Control sublayer also determines where one frame of data
ends and the next one starts – frame synchronization. There are four means of
frame synchronization: time based, character counting, byte stuffing and bit
stuffing.
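Of the four frame-synchronization methods, byte stuffing is the easiest to demonstrate. The sketch below assumes HDLC/PPP-style delimiter values (FLAG = 0x7E, ESC = 0x7D, escape by flipping bit 5); these constants are illustrative, not taken from the original text:

```python
FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    """Frame a payload with byte stuffing: FLAG marks the frame boundaries,
    and any FLAG/ESC byte inside the payload is escaped so the receiver
    never mistakes data for a delimiter."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape, then flip bit 5
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Recover the payload between the two FLAG delimiters."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)  # undo the bit-5 flip
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

A receiver can now find frame boundaries by scanning for the FLAG byte, because the stuffing guarantees no raw FLAG ever appears inside the frame body.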

Q3:- Explain various congestion control techniques.

Ans:- Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques can be broadly classified into two categories:

Open Loop Congestion Control

Open loop congestion control policies are applied to prevent congestion before
it happens. The congestion control is handled either by the source or the
destination.

Policies adopted by open loop congestion control –

Retransmission Policy :

This is the policy that governs how retransmission of packets is handled. If the sender believes that a sent packet is lost or corrupted, the packet needs to be retransmitted, and this retransmission may itself increase the congestion in the network.

To prevent congestion, retransmission timers must therefore be designed both to avoid adding congestion and to optimize efficiency.

Window Policy :

The type of window used at the sender side may also affect congestion. With a Go-Back-N window, several packets are resent even though some of them may have been received successfully at the receiver side. This duplication may increase congestion in the network and make it worse.

Therefore, a Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.

Discarding Policy :

A good discarding policy is one in which routers prevent congestion by partially discarding corrupted or less sensitive packets while still maintaining the quality of the message.

In the case of audio transmission, for example, routers can discard less sensitive packets to prevent congestion while maintaining the quality of the audio file.

Acknowledgment Policy :

Since acknowledgements are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments: the receiver should send one cumulative acknowledgement for N packets rather than an acknowledgement for every single packet, and it should send an acknowledgment only when it has a packet to send or when a timer expires.
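The ack-every-N idea can be sketched in a few lines; the function name, the choice of N = 4 and the "timer expiry" at the end of the stream are all illustrative assumptions, not details from the original text:

```python
def cumulative_acks(received_seqs, n=4):
    """Acknowledge once per n in-order packets instead of once per packet.
    Each ACK implicitly covers every packet up to that sequence number."""
    acks = []
    for i, seq in enumerate(received_seqs, start=1):
        if i % n == 0:
            acks.append(seq)            # one cumulative ACK for the last n packets
    if received_seqs and (len(received_seqs) % n):
        acks.append(received_seqs[-1])  # leftover packets acked on "timer expiry"
    return acks

print(cumulative_acks(list(range(1, 11))))   # far fewer ACKs than packets
```

With 10 packets and n = 4, only three ACK messages go back onto the network instead of ten, which is exactly the load reduction the policy aims for.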

Admission Policy :

In an admission policy, a mechanism is used to prevent congestion before it arises. Switches in a flow should first check the resource requirements of a network flow before transmitting it further. If congestion is likely, or already exists in the network, a router should deny establishing a virtual-circuit connection to prevent further congestion.

All the above policies are adopted to prevent congestion before it happens in the network.

Closed Loop Congestion Control

Closed loop congestion control techniques are used to treat or alleviate congestion after it happens. Several techniques are used by different protocols; some of them are:

Backpressure :

Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested, so that they in turn reject data from the nodes above them. Backpressure is thus a node-to-node congestion control technique that propagates in the direction opposite to the flow of data. It can be applied only to virtual circuits, where each node has information about its upstream neighbour.

(Figure: Backpressure)

In the diagram above, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may become congested because its output data flow slows down. Similarly, the 1st node may become congested and inform the source to slow down.

Choke Packet Technique :

The choke packet technique is applicable to both virtual-circuit and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packets have traveled are not warned about the congestion.

(Figure: Choke packet)
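The router-side check behind choke packets can be sketched as below; the 0.8 threshold and the interface names are assumed purely for illustration:

```python
def check_links(utilization, threshold=0.8):
    """A router samples the utilization of each output line and emits a
    choke notification, addressed straight to the traffic source, for
    every line that exceeds the administrator-set threshold."""
    chokes = []
    for line, u in utilization.items():
        if u > threshold:
            chokes.append(("CHOKE", line))   # feedback: source must slow down
    return chokes

print(check_links({"eth0": 0.55, "eth1": 0.93}))
```

Note that, as the text says, only the source receives this feedback; the intermediate hops that forwarded the packets are not informed.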

Implicit Signaling :

In implicit signaling, there is no communication between the congested node(s) and the source. The source guesses that there is congestion somewhere in the network: for example, when a sender sends several packets and no acknowledgment arrives for a while, one assumption is that the network is congested.

Explicit Signaling :

In explicit signaling, if a node experiences congestion it can explicitly sends a


packet to the source or destination to inform about congestion. The difference
between choke packet and explicit signaling is that the signal is included in the
packets that carry data rather than creating different packet as in case of choke
packet technique.

Explicit signaling can occur in either forward or backward direction.

Forward Signaling : In forward signaling, the signal is sent in the direction of the congestion, so the destination is warned about it. The receiver in this case adopts policies to prevent further congestion.

Backward Signaling : In backward signaling, the signal is sent in the direction opposite to the congestion. The source is warned about the congestion and needs to slow down.

Q4:- How does Transmission Control protocol (TCP) achieve reliability?

Ans:- Network congestion may occur when a sender floods the network with too many packets. At times of congestion the network cannot handle this traffic properly, which results in degraded quality of service (QoS). The typical symptoms of congestion are excessive packet delay, packet loss and retransmission.

Insufficient link bandwidth, legacy network devices, greedy network applications, or a poorly designed or configured network infrastructure are among the common causes of congestion. For instance, a large number of hosts in a LAN can cause a broadcast storm, which in turn saturates the network and increases the CPU load of the hosts. On the Internet, traffic may be routed via the shortest but not the optimal AS_PATH, with the bandwidth of the links not being taken into account. A legacy or outdated network device may be a bottleneck for packets, increasing the time that packets spend waiting in its buffers. Greedy network applications or services, such as file sharing or video streaming over UDP, which lack TCP's flow and congestion control mechanisms, can contribute significantly to congestion as well.

The function of TCP (Transmission Control Protocol) is to control the transfer of data so that it is reliable. The main TCP features are connection management, reliability, flow control and congestion control. Connection management includes connection initialization (a 3-way handshake) and its termination. The source and destination TCP ports are used for creating multiple virtual connections.

Reliable point-to-point transfer between hosts is achieved with sequence numbers (used for reordering segments) and retransmission. A TCP segment is retransmitted after a timeout, when the acknowledgement (ACK) is not received by the sender, or when three duplicate ACKs are received (this is called fast retransmission, because the sender does not wait until the timeout expires).

Flow control ensures that a sender does not overflow the receiving host. Inside its ACK messages, the receiver informs the sender how much data it may send before receiving the next ACK. This option is called the sliding window, and its size is defined in bytes. Thanks to the sliding window, a receiving host dynamically adjusts the amount of data that can be received from the sender.

The last TCP feature, congestion control, ensures that the sender does not overflow the network. Compared with flow control, where the mechanism ensures that the source host does not overflow the destination host, congestion control is more global: it ensures that the capacity of the routers along the path does not become overwhelmed.
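The duplicate-ACK rule described above can be sketched as a simplified model (this is an illustration of the fast-retransmit idea, not a real TCP implementation; the threshold of three duplicates follows the text):

```python
def fast_retransmit(acks, dup_threshold=3):
    """Scan a stream of received ACK numbers and decide which segments a
    sender would fast-retransmit: when the same ACK number arrives
    dup_threshold extra times, the segment starting at that number is
    resent immediately, without waiting for the retransmission timer."""
    retransmitted = []
    dup_count, last = 0, None
    for ack in acks:
        if ack == last:
            dup_count += 1
            if dup_count == dup_threshold:
                retransmitted.append(ack)   # resend the segment starting at `ack`
        else:
            last, dup_count = ack, 0        # a new ACK number resets the count
    return retransmitted

print(fast_retransmit([100, 200, 200, 200, 200, 300]))
```

Here the receiver keeps re-acknowledging byte 200 because a segment was lost, and after the third duplicate the sender resends it rather than waiting for the timeout.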

TCP congestion control techniques prevent congestion or help mitigate it after it occurs. Unlike the sliding window (rwnd) used in the flow control mechanism and maintained by the receiver, TCP uses the congestion window (cwnd), maintained by the sender. While rwnd is carried in the TCP header, cwnd is known only to the sender and is not sent over the links. Cwnd is maintained for each TCP session and represents the maximum amount of data that can be sent into the network without being acknowledged. Different variants of TCP use different approaches to calculate cwnd, based on the amount of congestion on the link. For instance, the oldest TCP variant, Old Tahoe, initially sets cwnd to one Maximum Segment Size (MSS). After each ACK packet is received, the sender increases the cwnd size by one MSS, following the formula cwnd = cwnd + MSS; since every segment in the window is acknowledged, cwnd grows exponentially, roughly doubling every round-trip time. This phase, during which the cwnd value is less than the ssthresh value, is known as "slow start".

Note: MSS defines the amount of data that a receiver can receive in one segment. The MSS value is set inside a SYN packet.
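The slow-start growth of cwnd can be traced with a short simulation (the MSS of 1460 bytes and the ssthresh of 16 segments are assumed values chosen for illustration):

```python
MSS = 1460  # assumed maximum segment size in bytes

def slow_start(ssthresh_segments=16, rtts=10):
    """Tahoe-style slow start: cwnd begins at 1 MSS and, because every ACK
    adds one MSS, it doubles each round-trip time until it reaches
    ssthresh, where congestion avoidance would take over."""
    cwnd = 1                         # congestion window, in segments
    trace = [cwnd]
    for _ in range(rtts):
        if cwnd >= ssthresh_segments:
            break                    # leave slow start once ssthresh is hit
        cwnd = min(cwnd * 2, ssthresh_segments)
        trace.append(cwnd)
    return [c * MSS for c in trace]  # window sizes per RTT, in bytes

print(slow_start())                  # 1, 2, 4, 8, 16 segments, in bytes
```

The exponential ramp (1, 2, 4, 8, 16 segments) is why the phase is called "slow start": it starts small but grows very quickly toward ssthresh.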

Q5:- Construct the hamming code for data 0111001.

Ans:-

Suppose the data to be transmitted is 1011001 (the worked example below uses this pattern; the same procedure applies to the 0111001 given in the question). The parity bits occupy positions 1, 2, 4 and 8, and the data bits, most significant first, occupy positions 11, 10, 9, 7, 6, 5 and 3.

Determining the parity bits –


1. The R1 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11

To find the redundant bit R1, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R1 is even, the value of R1 (the parity bit's value) = 0.
2. The R2 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the second position from the least significant bit.
R2: bits 2, 3, 6, 7, 10, 11

To find the redundant bit R2, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R2 is odd, the value of R2 (the parity bit's value) = 1.
3. The R4 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7

To find the redundant bit R4, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R4 is odd, the value of R4 (the parity bit's value) = 1.
4. The R8 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit.
R8: bits 8, 9, 10, 11

To find the redundant bit R8, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R8 is even, the value of R8 (the parity bit's value) = 0.

Thus, the data transferred is 0 1 1 1 0 0 1 0 1 0 1 (positions 1 through 11), i.e. 10101001110 when read from position 11 down to position 1.

Error detection and correction –


Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission. Recomputing the four parity checks at the receiver then gives the new parity values C8 C4 C2 C1 = 0 1 1 0.

These bits give the binary number 0110, whose decimal representation is 6. Thus, bit 6 contains the error; to correct it, the 6th bit is changed from 1 back to 0.
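The whole procedure can be verified with a short sketch (Python is used only for illustration; the bit placement follows the worked example above, with the most significant data bit at position 11 and even-parity check bits at positions 1, 2, 4 and 8):

```python
def hamming_encode(data_bits):
    """Encode 7 data bits into an 11-bit Hamming codeword. Parity bits sit
    at positions 1, 2, 4 and 8; data bits, MSB first, fill the rest."""
    code = [0] * 12                       # 1-indexed; index 0 is unused
    positions = [11, 10, 9, 7, 6, 5, 3]   # data positions, MSB first
    for pos, bit in zip(positions, data_bits):
        code[pos] = bit
    for p in (1, 2, 4, 8):                # even parity over each check group
        code[p] = sum(code[i] for i in range(1, 12) if i & p) % 2
    return code[1:]                       # bits for positions 1..11

def hamming_syndrome(code11):
    """Return the position of a single-bit error (0 means no error)."""
    code = [0] + list(code11)             # restore 1-indexing
    return sum(p for p in (1, 2, 4, 8)
               if sum(code[i] for i in range(1, 12) if i & p) % 2)

cw = hamming_encode([1, 0, 1, 1, 0, 0, 1])   # the data 1011001 from the example
print(cw)                 # parity bits R1=0, R2=1, R4=1, R8=0, as derived above
bad = cw[:]
bad[5] ^= 1               # flip position 6, as in the error-detection example
print(hamming_syndrome(bad))   # the syndrome points at position 6
```

Running this reproduces the hand computation: the encoder yields R1 = 0, R2 = 1, R4 = 1, R8 = 0, and after flipping bit 6 the syndrome comes out as 0110 = 6, identifying the corrupted position.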
