
TRAFFIC ENGINEERING

Introduction: Traffic engineering provides the basis for the analysis and design of telecommunication networks. It is not only the switching elements but also many other shared subsystems in a telecommunication network that contribute to the blocking of a subscriber call. In a telephone network, these include digit receivers, inter-stage switching links, call processors and trunks between exchanges. The load or traffic pattern on the network varies during the day, with heavy traffic at certain times and low traffic at others, as shown in the following figure:

[Figure: Typical Telephone Traffic Pattern on a Working Day. The plot shows the number of calls in the hour against the hour of the day.]

The task of designing cost-effective networks that provide the required quality of service under varied traffic conditions demands a formal scientific basis. Such a basis is provided by traffic engineering or tele-traffic theory. Traffic engineering analysis enables one to determine the ability of a telecommunication network to carry a given traffic at a particular loss probability. It provides a means to determine the quantum of common equipment required to provide a particular level of service for a given traffic pattern and volume.

Network Traffic Parameters

Busy Hour: The continuous 1-hour period, lying wholly in the time interval concerned, for which the traffic volume or the number of call attempts is the greatest.

Call Completion Rate (CCR): The ratio of the number of successful calls to the number of call attempts.

Busy Hour Call Attempts (BHCA): The number of call attempts in the busy hour. This is an important parameter in deciding the processing capacity of a common control or stored program control system of an exchange. The CCR parameter is used in dimensioning the network capacity. Networks are usually designed to provide an overall CCR of over 0.70.

The traffic load on a given network may be on the local switching unit, interoffice trunk lines or other common subsystems. For analytical treatment, all the common subsystems of a telecommunication network are collectively termed SERVERS (or links or trunks). The traffic on the network may then be measured in terms of the occupancy of the servers in the network. Such a measure is called the traffic intensity, which is defined as

Traffic intensity = (period for which the servers are occupied) / (total period of observation)

Generally, the period of observation is taken as one hour. The unit of traffic intensity is called the Erlang (E), in honor of the Danish telecom engineer A.K. Erlang, who did pioneering work in traffic engineering. A server is said to carry 1 Erlang of traffic if it is occupied for the entire period of observation.
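As a quick illustration of this definition, the short Python sketch below (a minimal example with made-up call records, not taken from the text) computes the traffic intensity carried by a server group from individual call holding times:

```python
# Minimal sketch (assumed example data): traffic intensity in Erlangs is the
# total server-busy time divided by the period of observation.

holding_times_min = [2.5, 4.0, 1.5, 3.0, 6.0, 2.0]  # holding time of each call (minutes)
observation_period_min = 60.0                        # one-hour observation window

total_busy_time = sum(holding_times_min)
traffic_intensity = total_busy_time / observation_period_min  # in Erlangs

print(f"Traffic intensity = {traffic_intensity:.3f} E")  # 19.0 / 60 = 0.317 E
```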

Traffic can be calculated in two ways: one based on the traffic generated by the subscribers and the other based on the observation of busy servers in the network. It is possible that the load generated by the subscribers sometimes exceeds the network capacity. There are two ways in which this overload traffic may be handled: it may be rejected without being serviced, or it may be held in a queue until network facilities become available. In the first case the calls are lost, and in the second case the calls are delayed. Correspondingly, two types of systems, called LOSS SYSTEMS and DELAY SYSTEMS, are encountered. Conventional automatic telephone exchanges behave like loss systems. In data networks, circuit switched networks behave as loss systems whereas store-and-forward message or packet networks behave as delay systems. The basic performance parameters for loss systems are the grade of service and the blocking probability, and for delay systems, the service delays. The traffic models used for studying loss systems are known as blocking or congestion models, and the ones used for studying delay systems are called queuing models.

Grade Of Service (GOS): This is defined as the ratio of lost traffic to offered traffic and is represented by the following relation:

GOS = (A - A0) / A

Where,
A = offered traffic, i.e. the product of the average number of calls generated by the users and the average holding time per call
A0 = carried traffic, i.e. the actual traffic carried by the network, which is the average occupancy of the servers in the network
A - A0 = lost traffic
The smaller the value of grade of service, the better the service. The recommended value of GOS is generally 0.002, which means that 2 calls in every 1000 calls may be lost.

The blocking probability is defined as the probability that all the servers in a system are busy. When all the servers are busy, no further traffic can be carried by the system and the calling subscriber's traffic is blocked. Generally, GOS is called call congestion or loss probability, and the blocking probability is called time congestion. At first sight, it may appear that the blocking probability is the same measure as GOS: the probability that all the servers are busy would seem to represent the fraction of calls lost, which is what the GOS is all about. However, this is generally not true. For example, in a system with an equal number of servers and subscribers, the GOS is zero, as there is always a server available to a subscriber; on the other hand, there is a definite probability that all the servers are busy at a given instant, and hence the blocking probability is non-zero. The fundamental difference is that the GOS is a measure from the subscriber's point of view, whereas the blocking probability is a measure from the network or switching system's point of view.

In the case of delay systems, the traffic carried by the network is the same as the load offered to the network by the subscribers, since all the calls in the overload traffic are put through the network as and when network facilities become available. GOS is therefore not meaningful for delay systems and always has a value of zero.

DELAY SYSTEMS: A class of telecommunication networks, such as data networks, places call or message arrivals in a queue when resources are not available and services them as and when the resources become available. Servicing is not taken up until the resources become available. Such systems are known as delay systems, waiting-call systems, lost calls delayed (LCD) systems or queuing systems.
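To make the GOS computation concrete, here is a minimal Python sketch (with assumed, illustrative traffic values of my own) that evaluates the grade of service from offered and carried traffic:

```python
# Minimal sketch with assumed numbers: GOS is the fraction of offered traffic
# that is lost, GOS = (A - A0) / A.

offered_traffic_erlangs = 25.0   # A: offered traffic (assumed value)
carried_traffic_erlangs = 24.95  # A0: traffic actually carried (assumed value)

lost_traffic = offered_traffic_erlangs - carried_traffic_erlangs
gos = lost_traffic / offered_traffic_erlangs

print(f"Lost traffic = {lost_traffic:.2f} E")
print(f"GOS = {gos:.4f}")  # 0.05 / 25 = 0.002, i.e. 2 calls lost per 1000
```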

Delay systems are analyzed using queuing theory, which is sometimes known as waiting line theory. Examples of delay systems in telecommunications include the following:

- Message switching
- Packet switching
- Digit receiver access
- Automatic call distribution
- Call processing

The elements of a queuing system are shown in the figure below:

[Figure: Elements of a queuing system. Traffic offered by a large population of sources enters a queue, which feeds a group of R identical servers (Server 1, Server 2, ..., Server R).]

There is a large population of sources that generate traffic or service requests to the network. There is a service facility that contains a number of identical servers, each of which is capable of providing the desired service to a request. When all the servers are busy, a request arriving at the network is placed in a queue until a server becomes available. While analyzing queuing systems, we have to deal with a number of random variables such as the number of waiting requests, the inter-arrival times between requests, and the time spent by a request in the system. The number of requests present in the system, or the state of the system, is given by the sum of the requests in the queue and those being serviced. No request can be pending in the queue unless all the servers are busy. Hence, we have

K = Q + R

where K is the number of requests in the system, Q is the number of requests waiting in the queue and R is the number of servers, all of which are busy whenever a request is waiting.

The mean time a call or request spends in the system is the sum of the mean waiting time and the mean service or holding time.

A queue operation enables better utilization of servers than does a loss system. Queuing has the effect of smoothing out the traffic flow as far as the servers are concerned. Peaks in the arrival process build up the queue length. Since there is no statistical limit on the number of arrivals occurring in a short period of time, an infinite queuing capacity would be needed if there were to be no loss of traffic. In a practical system, only finite queuing capacities are possible and hence there is a probability, however small it may be, of blocking in delay systems. Assuming that a delay system has infinite queue capacity in an operational sense, a necessary condition for its stable operation is as follows:

A / R < 1, or equivalently ρ = λ t_m / R < 1

where A is the offered traffic in Erlangs, R is the number of servers, λ is the arrival rate and t_m is the mean service time.
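A small Python sketch (illustrative only, with arrival rate, service time and server count assumed by me) that performs this stability check:

```python
# Minimal sketch (assumed numbers): a delay system is stable only if the
# offered load per server is less than 1, i.e. rho = (lambda * t_m) / R < 1.

arrival_rate_per_min = 10.0   # lambda: requests arriving per minute (assumed)
mean_service_time_min = 0.25  # t_m: mean service/holding time in minutes (assumed)
num_servers = 3               # R: number of identical servers (assumed)

offered_load = arrival_rate_per_min * mean_service_time_min  # A, in Erlangs
rho = offered_load / num_servers                             # utilization per server

print(f"Offered load A = {offered_load:.2f} E, per-server utilization rho = {rho:.2f}")
print("Stable" if rho < 1 else "Unstable: queue length grows without bound")
```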

If this condition is not satisfied, the queue length would become infinite sooner or later, and the system would never be able to clear the traffic offered to it. A queuing system is characterized by a set of six parameters. A shorthand notation, due to D.G. Kendall, is used to represent different types of queuing systems. This notation uses letters to identify the parameters and reads as A|B|C|K|M|Z. The parameter specifications are shown in Figure 6. The values of parameters K and M may be either a finite number or infinite. The queue discipline is the rule used for choosing the next customer to be serviced from the queue. Commonly used queue disciplines include first-come-first-served (FCFS), random selection and priority-based selection.

A: input (arrival) specification. M: purely random (Poisson); G: general (no assumptions)
B: service time distribution. M: negative exponential; G: general (no assumptions); D: constant or deterministic
C: number of servers. N: finite number
K: queue length. L: finite length; ∞: infinite length
M: number of sources. M: finite number; ∞: infinite
Z: service (queue) discipline. FCFS (first come first served); random selection; priority-based selection

Figure 6: Queuing System Notation, A | B | C | K | M | Z
The parameters K, M and Z may be omitted from the queue specification, in which case they assume default values. For K and M the default values are infinity, i.e. infinite queue capacity and infinite sources respectively, and the default queue discipline is FCFS. The parameter C is a non-zero positive finite number. The parameters A and B may assume any one of the values shown in Figure 6. As an example, the specification M|M|1 means a queuing system with purely random (Poisson) arrivals, negative exponential service times, one server, infinite queue capacity, infinite sources and FCFS queue discipline. As another example, M|D|4 means a queuing system with purely random (Poisson) arrivals, deterministic service times, four servers, infinite queue capacity, infinite sources and FCFS queue discipline.
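As a small illustration of these defaults, the Python sketch below (my own helper, not part of the text) expands a shorthand Kendall specification such as M|M|1 or M|D|4 into the full six-parameter form by filling in the default queue capacity, source population and discipline:

```python
# Minimal sketch: expand a shorthand Kendall specification A|B|C into the full
# A|B|C|K|M|Z form using the defaults described above (infinite queue capacity,
# infinite sources, FCFS discipline).

DEFAULTS = ["inf", "inf", "FCFS"]  # defaults for K, M and Z

def expand_kendall(spec: str) -> str:
    fields = spec.split("|")
    if not 3 <= len(fields) <= 6:
        raise ValueError("expected between 3 and 6 fields, e.g. 'M|M|1'")
    # Append defaults for whichever of K, M, Z were omitted.
    fields += DEFAULTS[len(fields) - 3:]
    return "|".join(fields)

print(expand_kendall("M|M|1"))  # M|M|1|inf|inf|FCFS
print(expand_kendall("M|D|4"))  # M|D|4|inf|inf|FCFS
```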

Exponential Service Times: The simplest delay system to analyze is one with random arrivals and negative exponential service times, M|M|N. In the M|M|N system, it is assumed that calls are serviced in the order of their arrival. The following analysis also assumes that the probability of an arrival is independent of the number of requests already in the queue (infinite sources). Under these assumptions, the probability that an arriving call finds all servers busy, and is therefore delayed, was derived by Erlang:

Probability (delay) = P(>0) = N B / (N - A (1 - B))          ..... equation (1)

Where,
N = the number of servers
A = the offered load in Erlangs
B = the blocking probability for a lost-calls-cleared system (Erlang-B)

The probability of delay, P(>0), is referred to as Erlang's second formula or the Erlang-C formula. For a single-server system (N = 1), the probability of delay reduces to P(>0) = A = ρ = λ t_m, which is simply the utilization, or the traffic carried by the server. Thus, the probability of delay for a single-server system is also equal to the offered load (assuming ρ < 1), i.e.
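The sketch below (a minimal Python implementation of my own, with illustrative traffic values) computes the Erlang-B blocking probability with the standard recursion and then applies equation (1) to obtain the Erlang-C delay probability:

```python
# Minimal sketch: Erlang-B via the standard recursion, then Erlang-C from
# equation (1), P(>0) = N*B / (N - A*(1 - B)). Input values are assumed.

def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of a lost-calls-cleared system with N servers and A Erlangs."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability of delay P(>0) for an M|M|N delay system (requires A < N)."""
    b = erlang_b(servers, offered_load)
    return servers * b / (servers - offered_load * (1.0 - b))

# Example (assumed values): 5 servers offered 3 Erlangs.
print(f"Erlang-B blocking    = {erlang_b(5, 3.0):.4f}")
print(f"Erlang-C delay prob. = {erlang_c(5, 3.0):.4f}")
```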

P(>0) = ρ = λ t_m          ..... equation (2)

Where,
P(>0) = the probability of delay in the case of a single server
λ = the number of messages or call arrivals per minute
t_m = the average service time, i.e. minutes per message or call


The distribution of waiting times for random arrivals, random service times, and a FCFS service discipline is

P(>t) = P(>0) e^(-(N - A) t / t_m)          ..... equation (3)

Where,
P(>0) = the probability of delay given in equation (1)
t_m = the average service time of the negative exponential service time distribution

By integrating equation (3) over all time, the average waiting time for all arrivals can be determined as,

t = P(>0) t_m / (N - A)          ..... equation (4)

Where,
t = the expected delay (waiting time) for all arrivals

The average delay of only those arrivals that get delayed is commonly denoted as t1 and is given by

t1 = t_m / (N - A)          ..... equation (5)
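To tie equations (1), (3), (4) and (5) together, here is a minimal, self-contained Python sketch (function names and input values are my own assumptions) that computes the delay metrics of an M|M|N queue:

```python
import math

# Minimal sketch (assumed inputs): delay metrics of an M|M|N queue using
# equations (1), (3), (4) and (5). P(>0) comes from the Erlang-C formula.

def erlang_b(n: int, a: float) -> float:
    """Erlang-B blocking probability (standard recursion)."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def delay_metrics(n: int, a: float, t_m: float, t: float):
    """Return P(>0), average wait for all arrivals, average wait of delayed
    arrivals, and P(wait > t). Requires a < n for stability."""
    b = erlang_b(n, a)
    p0 = n * b / (n - a * (1.0 - b))            # equation (1), Erlang-C
    avg_wait_all = p0 * t_m / (n - a)           # equation (4)
    avg_wait_delayed = t_m / (n - a)            # equation (5)
    p_gt_t = p0 * math.exp(-(n - a) * t / t_m)  # equation (3)
    return p0, avg_wait_all, avg_wait_delayed, p_gt_t

# Example (assumed values): 4 servers, 3 Erlangs offered, 2-minute mean service time.
p0, w_all, w_del, p5 = delay_metrics(n=4, a=3.0, t_m=2.0, t=5.0)
print(f"P(>0) = {p0:.3f}, avg wait = {w_all:.3f} min, "
      f"avg wait of delayed = {w_del:.3f} min, P(wait > 5 min) = {p5:.3f}")
```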
EXAMPLE: A message switching network is to be designed for 95% utilization of its transmission links. Assuming exponentially distributed message lengths and an arrival rate of 10 messages per minute, what is the average waiting time, and what is the probability that the waiting time exceeds 5 minutes?

SOLUTION: Assume that the message switching network uses a single channel between each pair of nodes, so there is a single server and a single queue for each transmission link.

Given: P(>0) = 95/100 = 0.95, N = 1, offered load A = 0.95 Erlangs

λ = 10 message arrivals per minute

Since P(>0) = A = λ t_m,

Average service time t_m = P(>0) / λ = 0.95 / 10 = 0.095 minutes

Therefore, average waiting time (not including service time)

t = P(>0) t_m / (N - A) = (0.95)(0.095) / (1 - 0.95) = 1.805 minutes

Probability of the waiting time exceeding 5 minutes, i.e. t = 5 minutes:

P(>5) = P(>0) e^(-(N - A)(5) / t_m) = 0.95 e^(-(0.05)(5)/0.095) = 0.95 × 0.0719 = 0.068

Thus, about 6.8% of the messages experience queuing delays of more than 5 minutes.
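As a cross-check of the numbers above, the same single-server (N = 1) calculation can be reproduced in a few lines of Python (values taken directly from the example):

```python
import math

# Reproduces the worked example: single server (N = 1), 95% utilization,
# 10 message arrivals per minute.

N = 1
p0 = 0.95                       # P(>0) = A = rho for a single server
arrival_rate = 10.0             # messages per minute
A = p0                          # offered load in Erlangs
t_m = A / arrival_rate          # average service time = 0.095 min

avg_wait = p0 * t_m / (N - A)                   # equation (4) -> 1.805 min
p_gt_5 = p0 * math.exp(-(N - A) * 5.0 / t_m)    # equation (3) -> ~0.068

print(f"t_m = {t_m:.3f} min, average wait = {avg_wait:.3f} min, P(>5 min) = {p_gt_5:.3f}")
```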
