
UNIT 2

1. Discuss the OSI reference model in detail, explaining the various layers of
communication.
The OSI (Open Systems Interconnection) model is a conceptual framework used
to understand network communication. It divides network communication into
seven distinct layers, each responsible for specific tasks. These layers facilitate
communication between different systems regardless of their underlying
architecture. Let's explore each layer in detail:
1. Physical Layer:
This is the first and lowest layer of the OSI model. It deals with the physical
connection between devices. It defines the hardware elements of a network, such
as cables, switches, and network interface cards. The physical layer is concerned
with the transmission and reception of raw data bits over a physical medium, like
electrical or optical signals.
2. Data Link Layer:
The data link layer provides a reliable link between two directly connected nodes
in a network. It ensures that data is delivered error-free and in the correct order.
It also handles issues like flow control and error detection. This layer is divided
into two sub-layers:
Logical Link Control (LLC): Responsible for flow control and error checking.
Media Access Control (MAC): Deals with the unique hardware addresses (MAC
addresses) of devices on the network, enabling the identification of devices in a
network segment.
3. Network Layer:
The network layer is responsible for logical addressing and routing of data packets
between different networks. It determines the best path for data packets to
travel from the source to the destination across multiple networks. IP (Internet
Protocol) operates at this layer. Routers work at the network layer, making
decisions about the optimal path for data packets to reach their destination.
4. Transport Layer:
The transport layer ensures end-to-end communication, reliability, and data
integrity between processes running on different hosts. It establishes, maintains, and
terminates connections between applications. Two main transport layer protocols
are TCP (Transmission Control Protocol), which ensures reliable data delivery, and
UDP (User Datagram Protocol), which is faster but does not guarantee delivery.
5. Session Layer:
The session layer establishes, maintains, and terminates communication sessions
between applications. It handles session setup, data exchange, and session
teardown. This layer is responsible for synchronization, dialog control, and
managing the flow of data between two devices.
6. Presentation Layer:
The presentation layer is responsible for data translation, compression,
encryption, and formatting. It ensures that data sent from the application layer of
one system can be read by the application layer of another system. This layer
deals with the syntax and semantics of the information exchanged between two
systems, ensuring that the data is understood at both ends.
7. Application Layer:
The application layer is the topmost layer of the OSI model. It provides network
services directly to end-users or applications. This layer enables communication
and interaction between different software applications. Examples of application
layer protocols include HTTP (Hypertext Transfer Protocol), FTP (File Transfer
Protocol), and SMTP (Simple Mail Transfer Protocol).
In summary, the OSI model provides a structured way to understand network
communication processes. Each layer has specific functions and protocols,
ensuring that data can be transmitted, received, and understood between
different devices and applications in a networked environment.
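To make the layer-by-layer flow concrete, here is a toy Python sketch of encapsulation, the mechanism by which each layer wraps the data handed down from the layer above. All header formats, addresses, and field layouts below are invented for illustration; real stacks use binary headers defined by the respective protocol standards.

```python
# Toy illustration of layered encapsulation (all header formats invented).
def application_layer(message: str) -> bytes:
    return f"GET /index.html\r\n\r\n{message}".encode()   # an HTTP-like request

def transport_layer(segment: bytes, src_port=49152, dst_port=80) -> bytes:
    header = f"TCP|{src_port}|{dst_port}|".encode()       # ports identify the apps
    return header + segment

def network_layer(packet: bytes, src="10.0.0.1", dst="93.184.216.34") -> bytes:
    header = f"IP|{src}|{dst}|".encode()                  # logical addressing
    return header + packet

def data_link_layer(frame: bytes, src_mac="aa:bb:cc:00:00:01",
                    dst_mac="aa:bb:cc:00:00:02") -> bytes:
    header = f"ETH|{src_mac}|{dst_mac}|".encode()         # hardware addressing
    return header + frame

wire_bytes = data_link_layer(network_layer(transport_layer(application_layer("hello"))))
print(wire_bytes)  # the physical layer would transmit these bytes as raw signals
```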
2. Why are protocols needed? What are the two interfaces provided by
protocols?
Protocols are essential in the field of networking and communication for several
reasons:
1. Interoperability:
Protocols define a set of rules and conventions that allow different systems and
devices to communicate with each other. They ensure that devices from different
manufacturers can work together seamlessly. Without protocols, devices might
not understand each other's signals or data formats.
2. Standardization:
Protocols provide standards that allow for uniformity in communication.
Standardized protocols ensure that everyone follows the same rules, making it
easier for different systems to understand and interpret data correctly. This
standardization is crucial for the global and diverse nature of modern networks.
3. Error Handling and Data Integrity:
Protocols often include mechanisms for error detection, correction, and data
integrity. These features ensure that data is transmitted accurately and reliably.
Error handling protocols help in retransmitting lost or corrupted data, maintaining
the integrity of the communication process.
4. Security:
Many protocols include security features, such as encryption and authentication,
which are crucial for protecting sensitive data from unauthorized access and
tampering during transmission.
5. Efficiency:
Protocols are designed to optimize data transmission and network efficiency.
They define efficient ways of encapsulating data, routing packets, managing
network resources, and handling congestion, ensuring that networks operate as
smoothly and quickly as possible.
Two Interfaces Provided by Protocols:
Protocols provide two fundamental types of interfaces:
1. Service Interface:
Service interface refers to the set of rules and protocols that define how a service
provided by a network should be accessed and used. It specifies what functions or
operations are provided by the network and how applications can request these
services. For example, HTTP (Hypertext Transfer Protocol) provides a service
interface for retrieving web pages over the internet.
2. Peer-to-Peer Interface:
The peer-to-peer interface defines the rules that govern the exchange of
messages between two entities at the same layer on different machines. This
interface ensures that the two peers understand each other's message formats
and procedures. For instance, TCP (Transmission Control Protocol) defines a
peer-to-peer interface for reliable, ordered, and error-checked delivery of data
between the transport entities on two hosts.
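As a concrete illustration of a service interface, the sketch below uses Python's standard socket module: the application invokes operations (connect, send, receive) offered by the transport layer without ever seeing how TCP's peer-to-peer exchange of segments and acknowledgements is carried out underneath. The host name and request are example values only.

```python
# Using TCP's service interface via Python's socket API (example host/request).
import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    # These calls are the *service interface*: what the layer offers upward.
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = s.recv(4096)

# The *peer-to-peer interface* is the TCP segment exchange (SYN, ACK,
# retransmissions) between the two transport entities; the application
# never sees it.
print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```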

3. What are the features provided by layering? Group the OSI layers by function.
Layering in networking provides several important features, including modularity,
ease of understanding, interoperability, and easier troubleshooting and
maintenance. By dividing the complex task of network communication into
smaller, manageable parts (layers), the OSI model achieves these features. Here's
how the OSI layers are grouped by function:
1. Physical and Data Link Layers:
Function: Basic Network Connectivity
Physical Layer (Layer 1): Deals with the physical connection between devices. It
defines the hardware elements of a network, such as cables, switches, and
network interface cards.
Data Link Layer (Layer 2): Provides error-free transmission over the physical
layer. It handles physical addressing (MAC addresses) and ensures reliable data
transfer between directly connected nodes.
2. Network Layer:
Function: Logical Addressing and Routing
Network Layer (Layer 3): Responsible for logical addressing and routing of data
packets between different networks. It determines the best path for data packets
to travel from the source to the destination across multiple networks.
3. Transport Layer:
Function: End-to-End Communication
Transport Layer (Layer 4): Ensures end-to-end communication, reliability, and
data integrity between devices on different hosts. It establishes, maintains, and
terminates connections between applications.
4. Session, Presentation, and Application Layers:
Function: Application-Level Interaction
Session Layer (Layer 5): Establishes, maintains, and terminates communication
sessions between applications. It handles session setup, data exchange, and
session teardown.
Presentation Layer (Layer 6): Responsible for data translation, compression,
encryption, and formatting. It ensures that data sent from the application layer of
one system can be read by the application layer of another system.
Application Layer (Layer 7): Provides network services directly to end-users or
applications. This layer enables communication and interaction between different
software applications.
Grouping the OSI layers by function allows for a clear understanding of their roles
in network communication. Each layer performs specific tasks, and the modular
design enables flexibility and interoperability in networking technologies.
Additionally, this layered approach simplifies the process of developing and
troubleshooting network protocols and applications, making the entire system
more manageable and scalable.

4. Distinguish between connectionless and connection-oriented communication
service, stressing the advantages and disadvantages of each.
Connectionless Communication:
Advantages:
Simplicity: Connectionless communication is simpler to implement as it doesn't
require establishing a connection before sending data.
Efficiency: It is more efficient for small, sporadic communications because there is
no overhead of connection establishment and teardown.
Scalability: Connectionless protocols, like UDP, are often used in scenarios where
a high volume of data needs to be broadcasted to multiple recipients, making
them scalable for such purposes.
Low Latency: Due to the lack of connection setup, data can be sent almost
immediately, leading to low latency transmissions.
Disadvantages:
Lack of Reliability: Connectionless communication does not guarantee that data
will be delivered, so there's a risk of data loss.
No Flow Control: There is no mechanism to prevent fast senders from
overwhelming slow receivers, potentially causing data congestion and loss.
No Error Recovery: In connectionless protocols, there is no built-in way to
recover lost or corrupted data.
Connection-Oriented Communication:
Advantages:
Reliability: Connection-oriented communication, such as TCP, ensures reliable
data delivery. It acknowledges received packets and retransmits if necessary,
guaranteeing that data is delivered correctly.
Flow Control: Connection-oriented protocols implement flow control
mechanisms to prevent overwhelming the receiver, ensuring smooth and efficient
data transfer.
Error Recovery: If data is lost or corrupted during transmission, connection-
oriented protocols can retransmit the lost packets, ensuring data integrity.
Ordered Delivery: Data is delivered in the order it was sent, which is crucial for
applications where the order of messages matters.
Disadvantages:
Complexity: Setting up and tearing down connections require additional
overhead, making connection-oriented communication more complex to
implement.
Latency: Due to the connection establishment process, there is an initial delay
before data transmission can begin, leading to higher latency compared to
connectionless communication.
Resource Intensive: Connection-oriented communication can be more resource-
intensive, especially in terms of memory and processing power, due to the need
to maintain connection state information.
In summary, the choice between connectionless and connection-oriented
communication depends on the specific requirements of the application.
Connectionless communication offers simplicity and efficiency, making it suitable
for applications where occasional data loss is acceptable, such as real-time
multimedia streaming. On the other hand, connection-oriented communication
provides reliability and ensures data integrity, making it suitable for applications
where accurate and ordered data delivery is critical, such as file transfers and web
transactions.
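The contrast is visible directly in the socket API. In the sketch below (addresses and ports are placeholders), the UDP sender simply fires a datagram with no setup and no delivery guarantee, while the TCP sender must first establish a connection, after which delivery, ordering, and retransmission are handled for it.

```python
import socket

# Connectionless (UDP): no handshake, no delivery guarantee, minimal latency.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"fire-and-forget", ("127.0.0.1", 9999))   # may silently be lost
udp.close()

# Connection-oriented (TCP): three-way handshake first, then reliable,
# ordered, flow-controlled delivery (raises an error if the peer is absent).
try:
    tcp = socket.create_connection(("127.0.0.1", 9999), timeout=2)
    tcp.sendall(b"reliable, ordered bytes")
    tcp.close()
except OSError as e:
    print("connection setup failed:", e)   # TCP tells us; UDP would not
```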
5. Two networks both provide reliable connection-oriented service. One of them
offers a reliable byte stream whereas the other offers a reliable message stream.
Are these identical? If not, why is the distinction made? Give an example where
they are different.
No, a reliable byte stream and a reliable message stream are not identical, and
the distinction between them is important in networking. The difference lies in
how data is treated and delivered within the network:
Reliable Byte Stream:
In a reliable byte stream, data is treated as a continuous flow of bytes without any
specific message boundaries.
The sender can send a continuous stream of bytes, and the receiver reads these
bytes as they arrive, without necessarily knowing the boundaries of individual
messages.
TCP (Transmission Control Protocol) is a good example of a protocol that provides
a reliable byte stream. In TCP, data is transmitted as a stream of bytes, and both
sender and receiver manage the data as a continuous flow without considering
message boundaries.
Reliable Message Stream:
In a reliable message stream, data is treated as discrete, distinct messages. Each
message has its own boundaries and is transmitted individually.
The sender sends messages, and the receiver reads entire messages, not just
bytes. Messages are received and processed as separate entities.
Message-oriented protocols like MQTT (Message Queuing Telemetry Transport)
and AMQP (Advanced Message Queuing Protocol) provide reliable message
streams. These protocols are widely used in scenarios where messages need to be
processed as independent units.
Example of the Difference:
Consider a chat application where users send text messages to each other. In a
reliable byte stream protocol, the sender might continuously type and send
characters, and the receiver reads and displays the received characters in real-
time, treating the input as a continuous flow of bytes. The receiver doesn't wait
for complete messages but processes the incoming bytes as they arrive.
In contrast, with a reliable message stream protocol, each text message is treated
as a separate, distinct message. When the sender presses "send," the entire
message is transmitted as a unit, and the receiver processes and displays the
complete message. There are explicit message boundaries, and the receiver waits
for complete messages before processing them.
The choice between a reliable byte stream and a reliable message stream
depends on the requirements of the application. Some applications, like real-time
messaging systems, benefit from message-oriented protocols, as they provide
clear message boundaries and simplify processing. Other applications, especially
those involving continuous data streams like audio or video streaming, may prefer
byte-oriented protocols for their continuous, uninterrupted flow of data.
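The practical consequence shows up in receive loops. Over a byte stream such as TCP, one receive call may return half a message or several messages glued together, so the application must impose its own boundaries; a common technique, sketched below with an invented 4-byte length prefix, is explicit framing. A message-stream protocol delivers each message whole and makes this code unnecessary.

```python
import struct

# Sender side: prepend each message with its length (4-byte big-endian).
def frame(message: bytes) -> bytes:
    return struct.pack(">I", len(message)) + message

# Receiver side: rebuild message boundaries from the raw byte stream.
def deframe(buffer: bytearray):
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break                          # wait for the rest of the message
        messages.append(bytes(buffer[4:4 + length]))
        del buffer[:4 + length]
    return messages

# TCP may hand us these bytes in arbitrary chunks; framing restores messages.
stream = bytearray(frame(b"hello") + frame(b"world")[:3])  # second msg cut short
print(deframe(stream))   # [b'hello'] -- 'world' is incomplete, so we wait
stream += frame(b"world")[3:]
print(deframe(stream))   # [b'world']
```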

6. What is the difference between confirmed service and unconfirmed service?
Give an example of each.
Confirmed Service:
Confirmed services in networking refer to communication protocols or
mechanisms where the sender receives acknowledgment or confirmation from
the receiver after successfully delivering a message. In confirmed services, the
sender waits for an acknowledgment from the receiver to ensure that the
message has been received correctly. If an acknowledgment is not received within
a specified time, the sender may retransmit the message to ensure its delivery.
Example of Confirmed Service: Email Delivery Receipts
When you send an email and receive a delivery receipt, it represents a confirmed
service. The email server sends a delivery receipt back to the sender to confirm
that the email has been successfully delivered to the recipient's mailbox. If the
sender doesn't receive the delivery receipt within a reasonable timeframe, they
might resend the email to ensure its delivery.
Unconfirmed Service:
Unconfirmed services, on the other hand, do not provide acknowledgment or
confirmation of message delivery. In these services, the sender does not wait for
any acknowledgment from the receiver after sending the message. Once the
message is sent, the sender does not receive any feedback, and there is no way to
confirm whether the message has been received or not.
Example of Unconfirmed Service: UDP (User Datagram Protocol)
UDP is a transport layer protocol that provides an unconfirmed service. When
data is sent using UDP, the sender simply sends the data packets to the recipient
without waiting for any acknowledgment. There is no guarantee that the data will
reach the destination, and there is no mechanism for the sender to know if the
data has been successfully received.
In summary, confirmed services provide acknowledgment or confirmation of
message delivery, ensuring that the sender knows whether the message has been
successfully received. Unconfirmed services, on the other hand, do not provide
any acknowledgment, and once the message is sent, there is no way to confirm its
delivery. The choice between confirmed and unconfirmed services depends on
the specific requirements of the application and the importance of reliable
message delivery.
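A confirmed service can be layered on top of an unconfirmed one. The sketch below (the address, timeout values, and the "ACK" convention are invented for illustration) implements a simple stop-and-wait sender over UDP: it transmits, waits for an acknowledgment, and retransmits on timeout, turning UDP's unconfirmed delivery into a confirmed exchange.

```python
import socket

def confirmed_send(data: bytes, addr=("127.0.0.1", 9999),
                   retries=3, timeout=1.0) -> bool:
    """Send over UDP and wait for an ACK; retransmit on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(data, addr)                 # unconfirmed transmission
            try:
                reply, _ = sock.recvfrom(16)
                if reply == b"ACK":                 # confirmation received
                    return True
            except OSError:
                continue                            # no ACK yet: retransmit
        return False                                # delivery never confirmed
    finally:
        sock.close()
```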

7. (a) What are the various kinds of delays in each link? Explain. (b) Describe the
various types of multiplexing used on a communication channel. Comment on
their properties.
7(a) Various Kinds of Delays in Communication Links:
In a communication network, several types of delays can occur in each link,
affecting the overall performance of data transmission. These delays include:
Propagation Delay:
Definition: Propagation delay is the time taken for a signal to travel from the
sender to the receiver.
Explanation: It depends on the distance between the sender and receiver and the
speed of the signal in the medium. Longer distances and slower mediums result in
higher propagation delays.
Transmission Delay:
Definition: Transmission delay is the time taken to push all the bits of a packet
into the link.
Explanation: It is determined by the size of the data packet and the bandwidth of
the link. Larger packets or lower bandwidth result in higher transmission delays.
Queuing Delay:
Definition: Queuing delay occurs when data packets have to wait in a queue
before they can be transmitted.
Explanation: It is influenced by the congestion level of the network. High traffic
and limited network resources can lead to longer queuing delays.
Processing Delay:
Definition: Processing delay is the time taken by the routers or switches to
process the packet header and make forwarding decisions.
Explanation: It depends on the complexity of the routing algorithms and the load
on the network devices. More complex algorithms or high loads result in higher
processing delays.
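A minimal Python sketch that puts these four components together for a single link; the function name, parameters, and example figures are illustrative only (queuing and processing delays are passed in, since in practice they depend on network load and device speed):

```python
def link_delay(packet_bits, bandwidth_bps, distance_m,
               propagation_speed_mps=2e8, queuing_s=0.0, processing_s=0.0):
    """Total one-link delay = transmission + propagation + queuing + processing."""
    transmission = packet_bits / bandwidth_bps         # time to push bits onto the link
    propagation = distance_m / propagation_speed_mps   # time for a bit to cross the link
    return transmission + propagation + queuing_s + processing_s

# Example: 1000-bit packet on a 1 Mbps, 100 km link (signal speed ~2e8 m/s):
# 1 ms transmission + 0.5 ms propagation = 0.0015 s
print(link_delay(1000, 1e6, 100e3))
```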
7(b) Various Types of Multiplexing:
Multiplexing is a technique used to combine multiple signals or data streams into
a single channel, allowing more efficient use of the communication medium.
There are several types of multiplexing techniques, each with unique properties:
Frequency Division Multiplexing (FDM):
Description: FDM divides the available bandwidth into multiple frequency bands.
Each signal occupies a different frequency band.
Properties: FDM is suitable for analog signals. Each signal can use its frequency
range, providing simultaneous transmission without interfering with each other.
Time Division Multiplexing (TDM):
Description: TDM divides the transmission time into fixed time slots. Different
signals are transmitted in different time slots.
Properties: TDM is efficient for digital signals. It ensures equal time for each
signal, suitable for voice and data applications.
Statistical Time Division Multiplexing (STDM):
Description: STDM dynamically allocates time slots based on demand. Time slots
are assigned based on the data rate of individual signals.
Properties: STDM adapts to varying data rates, providing flexibility and efficient
use of bandwidth. It's suitable for bursty data transmission.
Code Division Multiplexing (CDM):
Description: CDM assigns a unique code to each signal. Signals are spread over
the entire bandwidth using unique codes.
Properties: CDM is robust against interference and allows multiple signals to
share the same frequency spectrum without colliding. It's commonly used in
CDMA (Code Division Multiple Access) networks.
Each type of multiplexing has its advantages and is suitable for specific
applications. The choice of multiplexing technique depends on factors such as the
type of signals, bandwidth requirements, and the characteristics of the
communication medium.
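The difference between fixed and statistical time-division multiplexing can be seen in a toy scheduler. In the sketch below (the input queues and slot counts are invented for the example), fixed TDM visits every input in turn and wastes the slot when an input is idle, while the statistical multiplexer gives each slot to whichever input actually has data.

```python
from collections import deque

def make_inputs():
    # Three input streams sharing one output line; C is idle (invented data).
    return {"A": deque("AAAA"), "B": deque("BB"), "C": deque()}

def fixed_tdm(inputs, num_slots):
    names = list(inputs)
    out = []
    for slot in range(num_slots):
        q = inputs[names[slot % len(names)]]      # each input owns every 3rd slot
        out.append(q.popleft() if q else "-")     # "-" marks a wasted idle slot
    return "".join(out)

def statistical_tdm(inputs, num_slots):
    out = []
    for _ in range(num_slots):
        q = next((q for q in inputs.values() if q), None)  # any input with data
        out.append(q.popleft() if q else "-")
    return "".join(out)

print(fixed_tdm(make_inputs(), 9))        # AB-AB-A-- : idle C wastes its slots
print(statistical_tdm(make_inputs(), 9))  # AAAABB--- : slots follow the traffic
```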

8. What are the various parameters which are described to characterise queuing
systems. Explain the physical significance of each of these parameters.
Queuing systems are essential components in computer networks and various
other fields where entities (such as data packets, customers, or tasks) wait in line
for service. Several parameters are used to characterize queuing systems, each
providing valuable insights into the system's performance. Here are some key
parameters and their physical significance:
1. Arrival Rate (λ):
Physical Significance: Arrival rate represents the rate at which entities arrive at
the queue to be serviced.
Importance: A higher arrival rate indicates more entities arriving at the queue,
which might lead to congestion and longer waiting times if the service rate is
lower than the arrival rate.
2. Service Rate (μ):
Physical Significance: Service rate represents the rate at which entities are
serviced or processed and removed from the queue.
Importance: A higher service rate indicates faster processing of entities, reducing
the waiting time in the queue. If the service rate is lower than the arrival rate, a
backlog of entities will form.
3. Utilization Factor (ρ):
Physical Significance: Utilization factor represents the fraction of time the server
is busy.
Importance: A high utilization factor (ρ close to 1) indicates that the server is
almost always busy, potentially leading to longer queues and increased waiting
times.
4. Queue Length (L):
Physical Significance: Queue length represents the number of entities waiting in
the queue at a specific moment.
Importance: Monitoring queue length helps assess the congestion level. Longer
queues suggest high demand or insufficient service capacity.
5. Queueing Delay (Wq):
Physical Significance: Queueing delay represents the average time an entity
spends waiting in the queue before being serviced.
Importance: Longer queueing delays indicate that entities are spending more
time waiting, potentially affecting user experience and system performance.
6. Number of Servers (c):
Physical Significance: Number of servers represents the count of service points or
servers available to process entities.
Importance: Increasing the number of servers can reduce queueing delays,
especially when there are multiple queues and entities can be distributed across
servers.
7. Service Discipline:
Physical Significance: Service discipline defines the rules determining which entity
is serviced next.
Importance: Different service disciplines (e.g., First-Come-First-Served, Priority-
based) affect the fairness and efficiency of the queueing system.
8. Queue Capacity:
Physical Significance: Queue capacity represents the maximum number of
entities that the queue can hold at any given time.
Importance: Once the queue is full, new arriving entities might be dropped or
denied service, potentially leading to data loss or unsatisfied customers.
Understanding and analyzing these parameters in queuing systems are crucial for
optimizing resource allocation, improving efficiency, and providing better service
to users or clients. By monitoring and adjusting these parameters, system
administrators can enhance the overall performance and user satisfaction within
the queuing system.
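For the simplest single-server case, several of these parameters are tied together by the closed-form M/M/1 relations (derived in Q11 below). The sketch computes them from an assumed arrival rate and service rate; the numeric values are only examples.

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state M/M/1 metrics; requires lam < mu for stability."""
    assert lam < mu, "unstable: arrival rate must be below service rate"
    rho = lam / mu                 # utilization factor
    L = rho / (1 - rho)            # mean number in system
    W = 1 / (mu - lam)             # mean time in system (Little: L = lam * W)
    Wq = W - 1 / mu                # mean time waiting in queue
    Lq = lam * Wq                  # mean number waiting in queue
    return {"rho": rho, "L": L, "W": W, "Wq": Wq, "Lq": Lq}

print(mm1_metrics(lam=8.0, mu=9.6))   # e.g. the concentrator of Q12 below
```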

9. State Little's theorem and give a graphical proof of the theorem. State clearly
the assumptions made.
Little's Theorem: Little's Theorem is a fundamental result in queueing theory that
relates the average number of entities in a system (L), the average arrival rate of
entities (λ), and the average time an entity spends in the system (W):
L = λ × W
Where:
L is the average number of entities in the system.
λ is the average arrival rate of entities into the system.
W is the average time an entity spends in the system.
Assumptions:
The system is in a steady state (i.e., it has reached a stable condition where, over
the long run, the arrival rate equals the departure rate).
The averages L, λ, and W exist and are finite.
The entities in the system do not leave the system before completing their service
(no abandonment).
Notably, the theorem requires no assumption about the arrival distribution, the
service-time distribution, or the service discipline, which is what makes it so general.
Graphical Proof:
Plot the cumulative number of arrivals A(t) and the cumulative number of
departures D(t) against time t as staircase curves, with A(t) always lying on or
above D(t).
At any instant t, the vertical distance between the curves, L(t) = A(t) − D(t), is the
number of entities currently in the system. The horizontal distance between the
curves at height i is the time Wi that the i-th entity spends in the system.
Over a long observation interval from 0 to T, the area between the two curves can
therefore be computed in two ways:
Integrating vertically: the area equals the integral of L(t) over the interval, which
is the time-averaged number in system L multiplied by T.
Summing horizontally: the area equals the sum of the individual times Wi, which
is the number of arrivals A(T) multiplied by the average time W.
Equating the two expressions and dividing by T gives L = (A(T)/T) × W. Since
A(T)/T approaches the average arrival rate λ as T grows large, we obtain L = λW,
which is Little's Theorem.
Graphically, the theorem says that the area between the arrival and departure
curves, counted by time slices, equals the same area counted entity by entity.
This graphical proof provides an intuitive understanding of Little's Theorem,
emphasizing the relationship between arrival rate, average number of entities,
and average time spent in the system under the specified assumptions.
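The theorem can also be checked numerically. The sketch below simulates a single-server FIFO queue with Poisson arrivals and exponential service (an M/M/1 system; the rates and sample size are arbitrary example values) and compares the measured time-average number in system with λW.

```python
# A minimal M/M/1 simulation (Python) to check Little's theorem numerically.
import random

def simulate_mm1(lam=4.0, mu=5.0, num_arrivals=200_000, seed=1):
    random.seed(seed)
    t = 0.0
    server_free_at = 0.0
    total_time_in_system = 0.0    # sum of all sojourn times W_i
    last_departure = 0.0
    for _ in range(num_arrivals):
        t += random.expovariate(lam)                     # Poisson arrivals
        start = max(t, server_free_at)                   # wait if server busy
        server_free_at = start + random.expovariate(mu)  # exponential service
        total_time_in_system += server_free_at - t
        last_departure = max(last_departure, server_free_at)
    W = total_time_in_system / num_arrivals       # average time in system
    L = total_time_in_system / last_departure     # time-average number in system
    return L, W

L, W = simulate_mm1()
print(f"L = {L:.3f}, lam*W = {4.0 * W:.3f}")  # the two agree (Little's law)
print(f"theory: L = {4.0 / (5.0 - 4.0):.3f}") # M/M/1: L = lam/(mu-lam) = 4
```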

10. Why is Little's theorem of limited utility in the solution of practical design
problems in computer networks?
Little's Theorem is a powerful and fundamental concept in queueing theory,
providing a mathematical relationship between the average number of entities in
a system (L), the average arrival rate of entities (λ), and the average time an
entity spends in the system (W). While it offers valuable insights into the
relationship between these parameters in a steady-state queuing system, its
direct application to practical design problems in computer networks is limited for
several reasons:
Simplistic Assumptions: Little's Theorem relies on certain assumptions, including
a steady state, memoryless arrivals, and no entity abandonment. Many real-world
computer networks do not meet these assumptions. Networks are dynamic, with
varying loads and changing patterns of use, making the steady-state assumption
often unrealistic.
Limited Scope: Little's Theorem provides a general relationship but lacks the
specificity needed for complex network design problems. It does not consider
factors such as varying service rates, priority systems, multiple queues, and
network topologies. Real-world networks are more intricate and require detailed
modeling to capture their behavior accurately.
Dynamic Nature of Networks: Computer networks are dynamic, with changing
traffic patterns, varying loads, and adaptive protocols. Little's Theorem does not
account for these dynamic fluctuations, making it insufficient for addressing the
evolving nature of network traffic and demands.
Doesn't Address Quality of Service (QoS) Requirements: Practical network design
often involves meeting specific quality of service requirements, such as low
latency, high throughput, and minimal packet loss. Little's Theorem does not
provide mechanisms for incorporating these QoS considerations into network
design solutions.
Lack of Detailed Behavior: While Little's Theorem offers an overall relationship, it
does not delve into the detailed behavior of individual entities within the system.
Real-world network design often requires understanding individual entity
behavior to optimize specific aspects of the network's performance.
Real-time Considerations: Many modern computer networks, especially those in
applications like online gaming, video conferencing, or autonomous vehicles,
require real-time responses. Little's Theorem does not account for real-time
constraints, making it unsuitable for applications where timely processing is
crucial.
In summary, while Little's Theorem provides a theoretical foundation and a useful
understanding of the relationship between key queueing parameters, its
limitations in capturing the complexity, dynamics, and specific requirements of
practical computer networks hinder its direct applicability in solving real-world
design problems. Network designers rely on more sophisticated modeling
techniques, simulations, and empirical data analysis to develop effective solutions
for modern computer networks.

11. For the M/M/1 queue derive the expressions for the following : (a) Mean
number of customers in the system (b) Total waiting time of the customers.
In the M/M/1 queue, entities arrive according to a Poisson process with an
average arrival rate λ entities per time unit, and the service times are
exponentially distributed with an average service rate μ entities per time unit.
The "M/M/1" notation indicates exponentially distributed ("Markovian")
inter-arrival times and service times and a single server.
a) Mean Number of Customers in the System (L)
In steady state, the probability that the system contains n customers is
p(n) = (1 − ρ)ρ^n, where ρ = λ/μ is the utilization factor. The mean number of
customers in the system is therefore the mean of this geometric distribution:
L = ρ/(1 − ρ) = λ/(μ − λ)
Where:
λ = Arrival rate (average number of entities arriving per time unit)
μ = Service rate (average number of entities serviced per time unit)
b) Total Waiting Time of the Customers (W)
The total time a customer spends in the system (waiting plus service) follows
from Little's Law, L = λW:
W = L/λ = 1/(μ − λ)
The portion of this time spent waiting in the queue before service begins is
obtained by subtracting the mean service time 1/μ:
Wq = W − 1/μ = λ/(μ(μ − λ))
In summary, for the M/M/1 queue:
The mean number of customers in the system is L = λ/(μ − λ).
The total time of a customer in the system is W = 1/(μ − λ), of which
Wq = λ/(μ(μ − λ)) is spent waiting in the queue.
12. Consider a terminal concentrator with four 4800-bps input lines and one 9600
bps output line. The mean packet size is 1000 bits. Each of the four lines delivers
Poisson traffic with an average of 2 packets /second. What is the mean delay
experienced by the packet from the moment the last bit arrives at the
concentrator until the moment that bit is retransmitted on the output line? Also,
what is the mean number of packets in the concentrator, including the one in
service?
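A sketch of the standard solution, modeling the output line as an M/M/1 queue (this assumes exponentially distributed packet lengths with the given 1000-bit mean): the merged input is Poisson with rate λ = 4 × 2 = 8 packets/sec, and the 9600-bps output line serves packets at rate μ = 9600/1000 = 9.6 packets/sec. The mean delay from the arrival of a packet's last bit until that bit is retransmitted is W = 1/(μ − λ) = 1/(9.6 − 8) = 0.625 seconds, and the mean number of packets in the concentrator, including the one in service, is L = λW = 8 × 0.625 = 5 packets.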

13. Two computers are connected by a 64 kbps line. There are eight parallel
sessions using the line. Each session generates Poisson traffic with a mean of 2
packets per second. The packets are exponentially distributed with a mean of 200
bits. The system designers must choose between giving each session a dedicated
8-kbps piece of bandwidth (via TDM or FDM) or having all the packets compete
for a single 64-kbps shared channel. Which alternative gives a better response
time?
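A sketch of the comparison using the M/M/1 delay formula W = 1/(μ − λ): with dedicated 8-kbps channels (TDM/FDM), each session sees a service rate μ = 8000/200 = 40 packets/sec and a load λ = 2 packets/sec, so W = 1/(40 − 2) ≈ 26.3 ms. With a single shared 64-kbps channel, μ = 64000/200 = 320 packets/sec and the combined load is λ = 8 × 2 = 16 packets/sec, so W = 1/(320 − 16) ≈ 3.3 ms. Sharing the full channel gives a response time roughly eight times better, because capacity left idle by quiet sessions is immediately usable by busy ones.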
14. A communication line is divided into two identical channels, each of which will
serve a packet traffic stream where all the packets have equal transmission time T
and equal inter-arrival time R > T. Consider, alternatively, statistical multiplexing of
the two traffic streams by combining the two channels into a single channel with
transmission time T/2 for each packet. Show that the average system time of a
packet will be decreased from T to something between T/2 and 3T/4, while the
variance of waiting time in queue will be increased from 0 to as much as T²/16.
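An outline of the argument: with two dedicated channels, each packet is transmitted as soon as it arrives (R > T guarantees the previous packet has finished), so every packet's system time is exactly T and the waiting time in queue is always 0, with variance 0. With the combined channel, each transmission takes only T/2, but packets from the two streams can now collide. If the two streams' arrival instants are offset by at least T/2, no packet ever waits and every system time is T/2. In the worst case the two streams arrive simultaneously: in each pair one packet is served immediately (system time T/2) while the other waits T/2 (system time T), for an average of (T/2 + T)/2 = 3T/4. The average system time therefore lies between T/2 and 3T/4. In the worst case the waiting time is 0 or T/2 with equal probability, so its variance is E[W²] − (E[W])² = (1/2)(T/2)² − (T/4)² = T²/8 − T²/16 = T²/16, the maximum possible increase from 0.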
15. Customers arrive at a fast-food restaurant at a rate of 5 per minute and wait
to receive their order for an average of 5 minutes. Customers eat in the restaurant
with a probability 0.5 and carry out their order without eating with a probability
0.5. A meal requires an average of 20 minutes. What is the average number of
customers in the restaurant?
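A sketch of the solution using Little's theorem: the average time a customer spends in the restaurant is the 5-minute wait for the order plus, with probability 0.5, a 20-minute meal, so W = 5 + 0.5 × 20 = 15 minutes. With an arrival rate λ = 5 customers/minute, the average number of customers in the restaurant is N = λW = 5 × 15 = 75 customers.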

16. Consider a network of transmission lines where packets arrive at n different
nodes with corresponding rates λi, i = 1 to n. If N is the average total number of
packets inside the network, then find the average delay per packet and the
average delay per packet arriving at node i.
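A sketch of the solution using Little's theorem: applying the theorem to the network as a whole, the total arrival rate is λ1 + λ2 + ... + λn, so the average delay per packet is T = N / (λ1 + ... + λn). Applying the theorem only to the stream of packets entering at node i gives Ti = Ni / λi, where Ni is the average number of packets in the network that entered at node i (so that N = N1 + ... + Nn); the overall delay T is then the arrival-rate-weighted average of these per-node delays.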

17. Consider a queuing system with K servers and with room for at most N >= K
customers. The system is always full. If the average service time is X̄, find the
average customer time in the system.
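A sketch of the solution using Little's theorem: because the system is always full and N >= K, all K servers are continuously busy, so customers depart at rate K/X̄ and, to keep the system full, are admitted at the same rate. With N customers always in the system, Little's theorem gives the average customer time in the system as T = N / (K/X̄) = N X̄ / K.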
18. In Q17 above, if the customers arrive at a rate λ but are blocked from the
system if they find it full, find the blocking probability.
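A sketch: the rate of customers actually admitted to the system is λ(1 − PB), where PB is the blocking probability. In steady state this must equal the departure rate K/X̄ found in Q17, so λ(1 − PB) = K/X̄, giving PB = 1 − K/(λX̄).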

19. Consider a packet transmission system whose arrival rate (in packets/sec) is
increased from λ to kλ, where k>1 is some scalar factor. The packet distribution
remains the same but the transmission capacityis increased by a factor of k. What
is the effect on the following:(i) Average packet transmission time (ii)Utilization
factor (iii) Average number of packets in the system (iv) Average delay per packet.
Comment on the result.
Let's analyze the effects of increasing the arrival rate from λ to kλ in a packet
transmission system where the transmission capacity is also increased by a
factor of k.
(i) Average Packet Transmission Time:
The average packet transmission time is inversely proportional to the
transmission capacity: a packet of length P bits on a line of capacity C bps takes
P/C seconds to transmit. Equivalently, if μ is the service rate in packets per
second, the mean transmission time is 1/μ. When the capacity is increased by a
factor of k, the average packet transmission time decreases by the same factor,
from 1/μ to 1/(kμ).
(ii) Utilization Factor:
The utilization factor is the ratio of the arrival rate to the service rate, ρ = λ/μ.
When both rates are multiplied by k, the ratio is unchanged:
ρ' = kλ/(kμ) = λ/μ = ρ.
(iii) Average Number of Packets in the System:
For an M/M/1 system the average number of packets depends only on the
utilization: L = ρ/(1 − ρ). Since ρ is unchanged, the average number of packets in
the system remains the same, assuming the system operates under stable
conditions; the increases in arrival rate and transmission capacity cancel out.
(iv) Average Delay per Packet:
By Little's Law, W = L/λ. Since L remains constant and the arrival rate has
increased to kλ, the average delay per packet is reduced by a factor of k:
W' = L/(kλ) = W/k.
Conclusion:
In summary, when both the arrival rate and the transmission capacity are scaled
by a factor of k, the average packet transmission time and the average delay per
packet both decrease by a factor of k, the utilization factor stays constant, and
the average number of packets in the system stays the same. Proportionally
scaling a system up therefore carries the same traffic intensity with k times
smaller delays.
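A quick numeric check of these claims under M/M/1 assumptions (the rates below are arbitrary example values):

```python
def mm1(lam, mu):
    rho = lam / mu
    return rho, rho / (1 - rho), 1 / (mu - lam)   # utilization, L, W

for k in (1, 2, 4):
    rho, L, W = mm1(k * 8.0, k * 9.6)             # scale both rates by k
    print(f"k={k}: rho={rho:.3f}  L={L:.2f}  W={W:.4f} s")
# rho and L stay constant; W shrinks by the factor k.
```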

20. What is meant by saying that the exponential distribution has a memoryless
character? Prove it mathematically.
The memoryless property of the exponential distribution means that the
probability of an event occurring in the future is not influenced by past events. In
other words, the distribution does not "remember" the past. This property is
important in various real-life scenarios, especially in situations where events
occur randomly, such as in queueing systems or radioactive decay.
Mathematically, a random variable X is memoryless if it satisfies the following
property for all s, t > 0:
P(X > s + t | X > s) = P(X > t)
In words, this equation states that the probability that X exceeds s + t time units,
given that it has already exceeded s time units, equals the probability that X
exceeds t time units. An equivalent form of the property is:
P(X > s + t) = P(X > s) × P(X > t)
For the exponential distribution with rate parameter λ, the probability density
function (PDF) is:
f(x) = λ e^(−λx), for x > 0
and the corresponding survival function is P(X > x) = e^(−λx).
Now, let's prove the memoryless property mathematically for the exponential
distribution:
P(X > s + t | X > s)
= P(X > s + t and X > s) / P(X > s)
= P(X > s + t) / P(X > s)
= e^(−λ(s + t)) / e^(−λs)
= e^(−λt)
= P(X > t)
Therefore, we have proved that for the exponential distribution, the memoryless
property holds. This property is a fundamental characteristic of exponential
random variables and finds applications in various fields, particularly in modeling
scenarios involving waiting times and reliability analysis.
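A quick numerical illustration of the property (λ and the test points are arbitrary example values), using the survival function P(X > x) = e^(−λx):

```python
import math

lam, s, t = 0.7, 1.3, 2.1                      # arbitrary example values
surv = lambda x: math.exp(-lam * x)            # P(X > x) for Exp(lam)
cond = surv(s + t) / surv(s)                   # P(X > s+t | X > s)
print(cond, surv(t))                           # identical: memorylessness
```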

21. “Mathematical queuing models have a limited utility in analyzing practical
queuing systems, and more often than not recourse is taken to systems simulation
methodologies.” Comment on the statement.
