
will get queued at the relay node R since the outgoing link to D supports information transfer
at only 1 Mbps. The buffer at node R will overflow and packets will begin to be dropped at R,
resulting in congestion. Although the packets are dropped at R, it is common to say that the
links from S1 to R and S2 to R are congested because it is the packets arriving on these links
which are being dropped.

An obvious solution to the congestion in the above example is to employ flow control on the
sources S1 and S2 by having the relay node R throttle the amount of information they send to
it. But this method causes additional packets to be sent from R to the sources, which does not
scale as the number of sources increases. A simpler method, which does not require the relay
node to transmit any additional packets, is the following. Each source starts a timer for every
packet it sends and waits for an acknowledgement from the destination D confirming successful
reception. If packets are dropped at R, timeout events occur at a source and it infers that the
network is congested. It then reduces the rate at which it transmits information. If both sources
reduce their individual transmission rates, the congestion will subside and eventually disappear.
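
As a concrete illustration, the following Python sketch shows how a source can infer congestion
purely from missing acknowledgements, without any explicit message from the relay node. The
timeout value and the packet identifiers are illustrative assumptions, not values taken from the
example above.

import time

TIMEOUT = 0.5  # seconds to wait for an acknowledgement; an illustrative value


class Source:
    """Keeps one timer per outstanding packet and flags congestion on timeout."""

    def __init__(self):
        self.pending = {}  # packet id -> time the packet was sent

    def on_send(self, packet_id):
        self.pending[packet_id] = time.time()

    def on_ack(self, packet_id):
        # The destination D acknowledged reception; cancel the timer.
        self.pending.pop(packet_id, None)

    def congestion_detected(self):
        # A packet whose acknowledgement has not arrived within TIMEOUT is
        # assumed to have been dropped at a congested relay node.
        now = time.time()
        expired = [p for p, t in self.pending.items() if now - t > TIMEOUT]
        for p in expired:
            del self.pending[p]
        return len(expired) > 0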

Congestion is considered a serious problem in communication networks, and sources often reduce
their transmission rates aggressively in order to eliminate it as quickly as possible. Once the
congestion has cleared, the source transmission rates may be much lower than what the network
can support without congestion. Such a situation is also undesirable, as it increases the delay in
the information transfer from the sources to their respective destinations. The solution is to
continue reducing source transmission rates aggressively when timeouts occur, but to increase the
rates conservatively when acknowledgements arrive from the destination. Such a congestion control
scheme ensures that congestion is relieved quickly and that the source transmission rates are
slowly brought back up to levels at which they do not cause congestion.
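
One simple way to realize this policy of aggressive decrease and conservative increase is to halve
the rate on a timeout and add a small fixed increment on each acknowledgement, as in the sketch
below. The specific constants are arbitrary choices for illustration, not values prescribed above.

class RateController:
    """Halve the transmission rate on a timeout, add a small fixed increment
    on each acknowledgement (an additive-increase, multiplicative-decrease
    style of control)."""

    def __init__(self, initial_rate_bps, increment_bps=10_000, floor_bps=1_000):
        self.rate_bps = initial_rate_bps
        self.increment_bps = increment_bps  # conservative additive step
        self.floor_bps = floor_bps          # never throttle below this rate

    def on_timeout(self):
        # Aggressive reduction: a timeout is taken as a sign of congestion.
        self.rate_bps = max(self.floor_bps, self.rate_bps / 2)

    def on_ack(self):
        # Conservative increase: probe for spare capacity a little at a time.
        self.rate_bps += self.increment_bps

With two sources running such a controller over the 1 Mbps link in the example, repeated timeouts
quickly pull their combined rate below the bottleneck capacity, after which the small
per-acknowledgement increments let them approach it again slowly.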

This scheme helps the sources recover from congestion once it occurs, but it needs some packets
to be dropped before it can detect congestion. An alternative is to use schemes which predict
the occurrence of congestion and take steps to avoid it before it occurs. Such schemes are called
congestion avoidance schemes. A simple example is random early detection (RED), where the
relay nodes monitor the amount of free buffer space and, when they find that it is about to be
exhausted, randomly drop one of the packets which were sent to them. This causes the source of
that packet to time out while waiting for the corresponding acknowledgement from the destination
and eventually reduce its transmission rate. If there are other sources which can still cause
congestion at the relay node, their packets will occupy a significant fraction of the buffer, so a
random drop will pick one of their packets with high probability. In this way, all the sources
which can possibly cause congestion will eventually be throttled.
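
The sketch below illustrates the random early drop idea at a relay node. The occupancy threshold
and the uniform choice of victim among the buffered packets are simplifying assumptions; actual
RED implementations use an averaged queue length and a configurable drop-probability curve.

import random


class RelayBuffer:
    """A relay-node buffer that, once free space is about to be exhausted,
    drops a buffered packet chosen uniformly at random."""

    def __init__(self, capacity, threshold=0.9):
        self.capacity = capacity
        self.threshold = threshold  # occupancy fraction that triggers early drops
        self.queue = []

    def enqueue(self, packet):
        if len(self.queue) >= self.threshold * self.capacity:
            # Free buffer space is nearly exhausted: drop one buffered packet
            # at random.  A source whose packets occupy most of the buffer is
            # proportionally the most likely to lose a packet here and, after
            # the resulting timeout, to reduce its transmission rate.
            victim = random.randrange(len(self.queue))
            self.queue.pop(victim)
        self.queue.append(packet)

Dropping from the existing buffer contents rather than the arriving packet matches the argument
above that the heaviest senders are the most likely to be hit by the random drop.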

In this course, we will discuss flow control, congestion control and congestion avoidance schemes
which are either used or proposed for use on the Internet.

1.2.6 Security

Network security is the field addressing the problem of detecting, preventing or recovering from
any action which compromises the security of the information being transferred in a communication
network. An action which constitutes a security compromise can be one of the following:

• Eavesdropping: The users of a communication network often wish to communicate sensitive
information which needs to be kept secret from unauthorized users, who are also called
adversaries. Eavesdropping is said to occur if an adversary can intercept and understand
