Although the arrivals to the first trunk group may be random, the overflow process tends to
select groups of these arrivals and pass them on to the second trunk group. Thus, instead of
being random, the arrivals to the second group occur in bursts.
In finite-queue systems the arrivals that get blocked are those that would
otherwise experience long delays in a pure delay system. Thus an indication of the blocking
probability of a combined delay and loss system can be obtained from the probability that
arrivals in a pure delay system experience delays in excess of some specified value.
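This idea can be sketched numerically. For an M/M/N delay system with offered traffic A erlangs and mean service time tm, the Erlang C formula gives the probability of any delay, and the tail of the delay distribution is exponential. The function names and example numbers below are illustrative assumptions, not values from the text.

```python
from math import exp, factorial

def erlang_c(n, a):
    """Erlang C: probability an arrival is delayed in an M/M/N
    system with n servers and offered traffic a erlangs (a < n)."""
    rho = a / n
    delayed = a**n / (factorial(n) * (1.0 - rho))
    return delayed / (sum(a**k / factorial(k) for k in range(n)) + delayed)

def prob_delay_exceeds(n, a, t, tm=1.0):
    """P(waiting time > t): the delay tail is exponential with rate
    (n - a) / tm, so this estimates the blocking probability of a
    comparable combined delay-and-loss system."""
    return erlang_c(n, a) * exp(-(n - a) * t / tm)

# Example: 10 servers, 7 erlangs offered; probability of waiting
# longer than one mean holding time.
print(prob_delay_exceeds(10, 7, 1.0))
```

For a single server the Erlang C value reduces to the utilization rho, which is a quick sanity check on the formula.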
11 MARKS
1. Discuss in detail the network blocking probability.
End-to-End Blocking Probabilities:
Generally, a connection through a large network involves a series of transmission
links, each of which is selected from a set of alternatives. Thus an end-to-end blocking
probability analysis usually involves a composite of series and parallel probabilities. The
simplest procedure is identical to the blocking probability (matching loss) analyses for
switching networks. The blocking probability equation in the figure contains several
simplifying assumptions. First, the blocking probability (matching loss) of the switches is
not included. In a digital time division switch, matching loss can be low enough that it is
easily eliminated from the analysis. In other switches, however, the matching loss may not be
insignificant. When necessary, switch blocking is included in the analysis by considering it a
source of blocking in series with the associated trunk groups. When more than one route
passes through the same switch, as at node C in the figure, proper treatment of the correlation
between matching losses is an additional complication. A conservative approach considers
the matching loss to be completely correlated.
In this case the matching loss is in series with the common link. On the other hand, an
optimistic analysis assumes that the matching losses are independent, which implies that they
are in series with the individual links. The figure depicts these two approaches for including
the matching loss of switch C in the end-to-end blocking probability equation. In this case,
the link from C to D is the common link.
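The two treatments of switch C's matching loss can be compared numerically. The blocking values below are made-up illustrations, and chain is a hypothetical helper implementing the series rule (1 minus the product of availabilities):

```python
# Hypothetical per-link blocking probabilities around node C:
b_ca, b_cb = 0.01, 0.02   # two parallel routes into switch C
b_match = 0.005           # matching loss of switch C
b_cd = 0.01               # common link from C to D

def chain(*blockings):
    """Series combination: blocking is 1 minus the product of availabilities."""
    avail = 1.0
    for b in blockings:
        avail *= 1.0 - b
    return 1.0 - avail

# Conservative: matching loss fully correlated, so one matching loss
# appears in series with the common link C-D.
b_conservative = chain(b_ca * b_cb, b_match, b_cd)

# Optimistic: independent matching losses, one in series with each
# incoming route before the routes combine in parallel.
b_optimistic = chain(chain(b_ca, b_match) * chain(b_cb, b_match), b_cd)

print(b_conservative, b_optimistic)
```

As expected, the fully correlated treatment yields the larger (more conservative) end-to-end blocking estimate.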
A second simplifying assumption used in deriving the blocking probability equation
in the figure is that the blocking probabilities of the individual trunk groups are independent.
Thus the composite blocking of two parallel routes is merely the product of the respective
probabilities. Similarly, independence implies that the blocking probability of two paths in
series is 1 minus the product of the respective availabilities. In actual practice, individual
blocking probabilities are never completely independent. This is particularly true when a
large amount of traffic on one route arises as overflow from another route. Whenever the first
route is busy, it is likely that more than the average amount of overflow is being diverted to
the second route. Thus an alternate route is more likely to be busy when a primary route is
busy.
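Under the independence assumption, the two composition rules above can be written directly. The route topology and blocking values below are hypothetical illustrations:

```python
def parallel_blocking(route_blockings):
    """Alternative routes: a call is blocked only if every route is blocked,
    so the composite blocking is the product of the individual blockings."""
    p = 1.0
    for b in route_blockings:
        p *= b
    return p

def series_blocking(link_blockings):
    """Links in tandem: the call succeeds only if every link is available,
    so blocking is 1 minus the product of the availabilities."""
    avail = 1.0
    for b in link_blockings:
        avail *= 1.0 - b
    return 1.0 - avail

# Hypothetical path A-B-C with two alternative routes for the A-B leg:
ab = parallel_blocking([0.05, 0.08])        # composite A-B blocking
end_to_end = series_blocking([ab, 0.01])    # A-B choice in series with B-C
print(end_to_end)
```

As the text cautions, overflow traffic correlates route occupancies, so figures computed this way are optimistic for alternate-routed networks.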
Using the more general term queueing theory, we can apply the following analyses to a wide
variety of applications outside of telecommunications. Some of the more common
applications are data processing, supermarket check-out counters, aircraft landings,
inventory control, and various forms of service bureaus. These and many other applications
are considered in the field of operations research. The foundations of queueing theory,
however, rest on fundamental techniques developed by early telecommunications traffic
researchers. In fact, Erlang is credited with the first solution to the most basic type of delay
system. Examples of delay system analysis applications in telecommunications are message
switching, packet
switching, statistical time division multiplexing, multipoint data communications, automatic
call distribution, digit receiver access, signaling equipment usage, and call processing.
Furthermore, many PBXs have features allowing queued access to corporate tie lines or
WATS
lines. Thus some systems formerly operating as loss systems now operate as delay systems.
In general, a delay operation allows for greater utilization of servers (transmission
facilities) than does a loss system. Basically, the improved utilization is achieved because
peaks in the arrival process are "smoothed" by the queue. In this case, however, overload
traffic is delayed until call terminations produce available channels.
In most of the following analyses it is assumed that all traffic offered to the system
eventually gets serviced. One implication of this assumption is that the offered traffic
intensity A is less than the number of servers N. Even when A is less than N, there are two
cases in which the carried traffic might be less than the offered traffic. First, some sources
might tire of waiting in a long queue and abandon the request. Second, the capacity for
storing requests may be finite. Hence requests may occasionally be rejected by the system.
A second assumption in the following analyses is that infinite sources exist. In a
delay system, there may be a finite number of sources in a physical sense but an infinite
number of sources in an operational sense because each source may have an arbitrary number
of requests outstanding (e.g., a packet-switching node). There are instances in which a finite
source analysis is necessary, but not in the applications considered here.
An additional implication of servicing all offered traffic arises when infinite sources
exist. This implication is the need for infinite queuing capabilities. Even though the offered
traffic intensity is less than the number of servers, no statistical limit exists on the number of
arrivals occurring in a short period of time. Thus the queue of a purely lossless system must
be arbitrarily long. In a practical sense, only finite queues can be realized, so either a
statistical chance of blocking is always present or all sources can be busy and not offer
additional traffic.
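One standard way to quantify the blocking introduced by a finite queue is the single-server M/M/1/K model; the formula below is the textbook result (not derived in this text), where K counts the total positions in the system, one in service plus K - 1 waiting:

```python
def mm1k_blocking(rho, k):
    """Probability an arrival finds an M/M/1/K system full, where
    rho is the offered load in erlangs and k is the total number of
    positions (one in service plus k - 1 waiting)."""
    if rho == 1.0:
        return 1.0 / (k + 1)  # limiting case of the general formula
    return (1.0 - rho) * rho**k / (1.0 - rho**(k + 1))

# Even with rho < 1, a short queue blocks noticeably; a long queue
# makes blocking statistically negligible but never exactly zero.
for k in (1, 5, 20):
    print(k, mm1k_blocking(0.8, k))
```

With K = 1 (no waiting room) the expression reduces to the single-server pure loss result rho / (1 + rho), which ties the finite-queue model back to the loss systems discussed earlier.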
When analyzing delay systems, it is convenient to separate the total time that a
request is in the system into the waiting time and the holding time. In delay system analysis
the holding time is more commonly referred to as the service time. In contrast to loss
systems, delay system performance is generally dependent on the distribution of service
times and not just the mean value t_m. Two service time distributions are considered here:
constant service times and exponential service times. Respectively, these distributions
represent the most deterministic and the most random service times possible. Thus a system
that operates with some other distribution of service times performs somewhere between the
performance produced by these two distributions.
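This ordering can be made concrete with the Pollaczek-Khinchine mean-wait formula for a single-server delay system with Poisson arrivals; constant and exponential service times correspond to squared coefficients of variation of 0 and 1. This is a sketch of the standard result, not a derivation from the text.

```python
def mg1_mean_wait(rho, tm, cs2):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    rho = utilization (< 1), tm = mean service time, cs2 = squared
    coefficient of variation of the service time distribution
    (0 = constant service, 1 = exponential service)."""
    assert 0.0 <= rho < 1.0
    return rho * tm * (1.0 + cs2) / (2.0 * (1.0 - rho))

tm, rho = 1.0, 0.8
w_const = mg1_mean_wait(rho, tm, 0.0)  # most deterministic service
w_exp = mg1_mean_wait(rho, tm, 1.0)    # most random service
print(w_const, w_exp)
# Any service-time distribution with cs2 between 0 and 1 yields a
# mean wait between these two values, matching the text's claim.
```

Note that constant service halves the mean wait relative to exponential service at the same load, which is why delay systems carrying uniform-length messages outperform those with highly variable holding times.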