Note
We use the term bandwidth in two contexts:
▪ Bandwidth in hertz:
the range of frequencies in a composite signal, or
the range of frequencies that a channel can pass.
▪ Bandwidth in bits per second:
the number of bits per second that a channel or
link can transmit. Often referred to as capacity.
Throughput
■ Throughput refers to the amount of data that can be transferred from one device to
another in a given amount of time.
■ Throughput (bits/s) =
(number of successful packets × average packet size in bits) / total time
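The formula above can be checked with a quick sketch; the traffic figures below (packet count, packet size, and interval) are made-up assumptions, not values from the slides:

```python
# Throughput = (number of successful packets * average packet size) / total time
def throughput_bps(successful_packets, avg_packet_size_bits, total_time_s):
    return successful_packets * avg_packet_size_bits / total_time_s

# Hypothetical traffic: 12,000 packets of 10,000 bits each, delivered in 60 s.
tput = throughput_bps(12_000, 10_000, 60)
print(f"{tput / 1e6:.0f} Mbps")  # 2 Mbps
```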
Example 3.44
Solution
We can calculate the throughput as follows.
Latency (Delay)
■ The latency, or delay, defines how long it takes for an
entire message to arrive completely at the destination,
measured from the time the first bit is sent out from the source.
Propagation and Transmission Delay
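The standard definitions behind this slide are propagation time = distance / propagation speed, and transmission time = message size / bandwidth. A minimal sketch, assuming example values for the link (the distance, speed, message size, and bandwidth below are hypothetical):

```python
def propagation_time_s(distance_m, propagation_speed_mps):
    # Time for one bit to travel across the link.
    return distance_m / propagation_speed_mps

def transmission_time_s(message_size_bits, bandwidth_bps):
    # Time to push all of the message's bits onto the link.
    return message_size_bits / bandwidth_bps

# Hypothetical link: 12,000 km at 2.4e8 m/s; 2,500-byte message at 1 Gbps.
print(propagation_time_s(12_000_000, 2.4e8))   # 0.05 s
print(transmission_time_s(2_500 * 8, 1e9))     # 2e-05 s
```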
Queuing Time
■ The time needed for each intermediate or end device to
hold the message before it can be processed.
■ The queuing time is not a fixed factor; it changes with the
load imposed on the network.
■ When there is heavy traffic on the network, the queuing
time increases.
■ An intermediate device, such as a router, queues
arriving messages and processes them one by one.
■ If there are many messages, each message will have to
wait.
Processing time
■ In packet switching, processing delay is the time it
takes routers to process the packet header.
■ Processing delay is a key component of network delay.
■ It is often ignored because it is insignificant compared
with the other delay components.
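Latency is the sum of the four components discussed above: propagation time, transmission time, queuing time, and processing time. A small sketch, with assumed per-component values:

```python
def latency_s(propagation_s, transmission_s, queuing_s, processing_s):
    # Total time from the first bit leaving the source
    # to the last bit arriving at the destination.
    return propagation_s + transmission_s + queuing_s + processing_s

# Hypothetical components (seconds); queuing varies with network load.
print(latency_s(0.05, 0.02, 0.003, 0.0))  # ~0.073 s
```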
Example 3.45
Solution
We can calculate the propagation time as follows.
Example 3.46
Solution
We can calculate the propagation and transmission times as follows.
Example 3.47
Solution
We can calculate the propagation and transmission
times as follows.
Note
Because the message is very long and the bandwidth is
not very high, the dominant factor is the transmission
time, not the propagation time; the propagation time can
be ignored.
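The note can be checked numerically. With an assumed long message on a modest link (all figures below are hypothetical, not taken from the example), the transmission time dwarfs the propagation time:

```python
# Hypothetical: 5-Mbyte message, 1-Mbps link, 12,000-km distance,
# propagation speed 2.4e8 m/s.
t_prop = 12_000_000 / 2.4e8         # propagation time: 0.05 s
t_trans = (5_000_000 * 8) / 1e6     # transmission time: 40 s
print(f"transmission is {t_trans / t_prop:.0f}x the propagation time")
```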
Jitter
■ Another performance issue related to delay is jitter.
■ Jitter is a problem if different packets of data encounter
different delays and the application using the data at the
receiver is time-sensitive (audio and video, for example).
■ But what happens if the packets arrive with different delays?
■ For example, say the first packet arrives at 00:00:01 (1-s delay), the
second arrives at 00:00:15 (5-s delay), and the third arrives at
00:00:27 (7-s delay). If the receiver starts playing the first packet at
00:00:01, it will finish at 00:00:11. However, the next packet has
not yet arrived; it arrives 4 s later.
■ There is a gap between the first and second packets and between
the second and the third as the video is viewed at the remote site.
This phenomenon is called jitter.
■ The figure below shows the situation.
Timestamp
■ One solution to jitter is the use of a timestamp. If each packet has a
timestamp that shows the time it was produced relative to the first
(or previous) packet, then the receiver can add this time to the time
at which it starts the playback.
■ In other words, the receiver knows when each packet is to be
played.
■ Imagine the first packet in the previous example has a timestamp of
0, the second has a timestamp of 10, and the third has a timestamp
of 20.
■ If the receiver starts playing back the first packet at 00:00:08, the
second will be played at 00:00:18 and the third at 00:00:28.
■ There are no gaps between the packets.
■ To prevent jitter, we can time-stamp the packets and
separate the arrival time from the playback time.
■ The figure below shows the situation.
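The timestamp scheme can be sketched as a small playback-scheduling calculation; the arrival times and timestamps below mirror the worked example above:

```python
# (arrival time, timestamp) in seconds, from the worked example above.
packets = [(1, 0), (15, 10), (27, 20)]

# Gap-free playback needs start + timestamp >= arrival for every packet,
# so the earliest safe start is the largest per-packet delay.
earliest_start = max(arrival - ts for arrival, ts in packets)
print(earliest_start)  # 7; the example starts at 8, leaving 1 s of slack

start = 8  # playback start used in the example
play_times = [start + ts for _, ts in packets]
print(play_times)  # [8, 18, 28] -- no gaps between packets
```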