Contents
4.2.2 Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.3.2 Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4 Aloha . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.1 Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.6.3 Straight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.6.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.7 Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.8 Token-ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2.2 Shufflenet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2.3 Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.4.2 Throughput S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.3 The limiting distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.7 Geo/Geo/1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.9 Geo/Geo/1/B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.9.2 Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.10.1 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
1 Network performance analysis
Figure 1:
A network is a system that offers telecommunication services to clients. Services can be calls, in a classical telephone network, or packet deliveries, in a packet network.
Figure 2:
where X_i are the interarrival times and t_i = Σ_{j=1}^{i} X_j are the arrival times.

A(t)/t = {time average of arrivals on [0, t]}.
From now on it is assumed that the time average exists; it is called the arrival rate:
lim_{t→∞} A(t)/t = λ  [arrivals/second].
What we did for the arrival process can be done for the departure process as well.
Figure 3:
Figure 4:
A network is said to be in steady state (or at equilibrium) when the departure and arrival rates are balanced:

lim_{t→∞} P(t)/t = lim_{t→∞} A(t)/t = λ  [arrivals/second]

where P(t) is the number of departures on [0, t].
The long-term average number of clients in a stable system Q̄ is equal to the long-term average arrival
rate, λ, multiplied by the average time a client spends in the system, D̄.
Q̄ = λD̄
where the long-term average number of clients in the system is defined as
Figure 5:
Q̄ = lim_{t→∞} (1/t) ∫₀ᵗ q(τ) dτ
This law is the most general law valid for a steady-state network.
Figure 6:
I_j(t) = 1 if client j is in the network at time t, 0 otherwise.
Thus
Figure 7:
Q(t) = Σ_{j=1}^{∞} I_j(t).
Q̄ = lim_{t→∞} (1/t) ∫₀ᵗ Q(τ)dτ = lim_{t→∞} (1/t) ∫₀ᵗ Σ_{j=1}^{∞} I_j(τ)dτ = lim_{t→∞} (1/t) Σ_{j=1}^{∞} ∫₀ᵗ I_j(τ)dτ
Thus, rewriting everything with these considerations and remembering that the number of arrivals in [0, t] is A(t),

Q̄ = lim_{t→∞} (1/t) Σ_{j=1}^{A(t)} D_j

where D_j is the time spent in the system by client j,
which can be simplified using a simple trick and remembering from probability theory the Law of Large Numbers.²
¹ Note that the sum is over all clients of the network (arrived and arriving). If a client is not in the network at time t, the value of its indicator function is zero.
² Law of Large Numbers (LLN): lim_{n→∞} (X₁ + X₂ + ... + X_n)/n = X̄, if the {X_i} are weakly correlated random variables with the same mean X̄.
Figure 8:
Q̄ = lim_{t→∞} (1/t) Σ_{j=1}^{A(t)} D_j = lim_{t→∞} [A(t)/t] · [1/A(t)] Σ_{j=1}^{A(t)} D_j = λ · D̄
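As a sanity check, Little's law can also be verified numerically. The sketch below is not part of the original notes: it is an illustrative simulation of an M/M/1 FIFO queue (function name, rates and sample size are my own choices) that computes Q̄ as the area under Q(t) divided by the observation time, exactly as in the derivation above, and compares it with λ·D̄.

```python
import random

def simulate_queue(lam, mu, n_clients, seed=1):
    """Simulate an M/M/1 FIFO queue; return (Q_bar, D_bar, lam_hat)."""
    rng = random.Random(seed)
    t_arr, server_free = 0.0, 0.0
    total_sojourn = 0.0
    for _ in range(n_clients):
        t_arr += rng.expovariate(lam)         # Poisson arrivals
        start = max(t_arr, server_free)       # wait if the server is busy
        server_free = start + rng.expovariate(mu)
        total_sojourn += server_free - t_arr  # D_j of this client
    T = server_free                           # system is empty at the end
    Q_bar = total_sojourn / T                 # area under Q(t), divided by T
    D_bar = total_sojourn / n_clients
    return Q_bar, D_bar, n_clients / T

Q, D, lam_hat = simulate_queue(lam=0.5, mu=1.0, n_clients=100_000)
print(Q, lam_hat * D)   # the two values coincide: Little's law
```

Note that Q̄ = λ̂·D̄ holds exactly here, because both sides equal (Σ_j D_j)/T: this is precisely the identity proved above.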
Exercise (highway toll barrier) — Cars arrive at a barrier of 20 tolls at a rate of 30 cars per minute.

1. Knowing that at the barrier on average there are 180 cars, evaluate the average time spent by each car at the barrier.

2. Knowing that the average service time of every toll is 20 seconds, evaluate the average number of busy tolls.
Generally every network can be represented as a queue, as depicted in the example. The system has a certain number of servers (tolls in this case) that serve the queue of clients (cars in this case).
For instance, in packet networks the servers are the channels and the packets are the clients (packet switching). In telephone networks the clients are the incoming calls and the servers are the input-output pairs (circuit switching).
Solution point 1 — In the highway toll barrier example, applying Little's law to the whole network we have

Q̄ = λD̄ ⇒ D̄ = Q̄/λ = 180/30 = 6 min
So every car stays 6 minutes, on average, at the barrier: 20 seconds are spent at the toll (to pay) and the remaining time is spent waiting in the queue.
Figure 9:
Solution point 2 — Little's law has to be applied to the subnetwork of servers (tolls). If the network is in steady state the arrival rate doesn't change, but the time spent in the subnetwork is now the average service time S̄ = 20 sec. Hence, denoting by N̄_C the average number of busy tolls,

N̄_C = λS̄ = (30/60) · 20 = 10 busy tolls
Since the number of tolls is 20, to have a stable network (a network capable of serving all the incoming clients) it must hold

λS̄ < C

where C is the total number of servers in the network. λS̄ is called traffic intensity and it is measured in Erlang.
Note — If λS̄ ≥ C it is not possible to reach a steady-state regime (equilibrium): Q̄ = lim_{t→∞} (1/t) ∫₀ᵗ Q(τ)dτ → ∞ and Little's law is not valid anymore.
Note — λS̄ = [arrivals/second]·[seconds] is the average number of arrivals during the average service time.

ρ ≜ N̄_C/C = λS̄/C.
Moreover, it can be proved that the occupation fraction ρ is the probability of finding a server busy at time t. In a work-conserving³ single-server network, the probability of finding the server idle is therefore 1 − ρ.
Figure 10:
Solution point 1 — Assume that the system is in steady state, thus the arrival and departure rates are the same. Applying Little's law:

D̄ = Q̄/λ = 15/(3/10) = 50 min
where D̄ is the average time spent at the doctor's (office and waiting room).
Solution point 2 — Applying Little's law to the server (the doctor) we obtain the traffic intensity (in Erlang)

ρ = λS̄

Knowing that there is only one doctor and applying the stability condition ρ = λS̄ < 1:

S̄ < 1/λ = 3.3̄ min
³ A work-conserving network is a network that always works when there are clients to be served, i.e. a non-lazy network.
This is the maximum average visit time to have a stable system. Otherwise the waiting room will soon be crowded and full: there will be a time when the waiting room will not be able to receive any new client.
Figure 11:
A fraction of the clients will be lost (λ_p) and a fraction will be served (λ_i). The throughput is the rate of the clients that actually enter the network, so Little's law reads

Q̄ = λ_i D̄

where D̄ is the delay of the clients that enter the network. Thus the network's throughput is related to the arrival rate and to the loss probability as follows:

λ_i = λ(1 − p_l)
The stability condition follows:

λ_i < C/S̄

This is called saturation throughput and it is the maximum traffic affordable by the network.
2 The arrival process
To represent the arrival process the most used tool is the Poisson process.
Figure 12:
In each slot there can be 1 arrival, with probability p, or 0 arrivals, with probability 1 − p. Thus we avoid multiple arrivals in the same slot. This model suits well the scenario of independent arrivals (e.g. clients do not coordinate and do not agree on the arrival time).

Now the length of a time slot is reduced to zero, Δt → 0, hence the probability of an arrival tends to zero: p = Δt · λ. The average number of arrivals per time unit is forced to remain the same: p/Δt = λ. A process built this way is called a Poisson process.
Figure 13:
Figure 14:
The number of arrivals on an interval of length T, A(T), is a Poisson random variable with expected value equal to λT:

P{A(T) = k} = e^{−λT} (λT)^k / k!    k = 0, 1, 2, ...
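The slotted construction above can be checked numerically. The following sketch (mine, not from the notes; all names and parameter values are illustrative) counts arrivals over Bernoulli slots with p = λ·Δt and compares the empirical count distribution with the Poisson probability mass function.

```python
import math
import random

def slotted_arrivals(lam, T, dt, rng):
    """Count arrivals in [0, T] using Bernoulli slots of width dt (p = lam*dt)."""
    p = lam * dt
    n_slots = round(T / dt)
    return sum(1 for _ in range(n_slots) if rng.random() < p)

rng = random.Random(42)
lam, T, dt = 2.0, 5.0, 0.01
counts = [slotted_arrivals(lam, T, dt, rng) for _ in range(5_000)]
mean = sum(counts) / len(counts)
print(mean)                     # close to lam * T = 10

k = 10
empirical = counts.count(k) / len(counts)
poisson = math.exp(-lam * T) * (lam * T) ** k / math.factorial(k)
print(empirical, poisson)       # the two probabilities are close
```

Shrinking dt further makes the binomial count converge to the Poisson law, which is exactly the content of Proof P1 below.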
Figure 15:
Figure 16:
Proof P1 — The probability of k successes on an interval T divided in N time slots of length Δt is binomial:

P{A(T = NΔt) = k} = C(N,k) p^k (1 − p)^{N−k}

where C(N,k) is the binomial coefficient. Expanding,

= [N!/(k!(N−k)!)] p^k (1−p)^{N−k} = [N(N−1)···(N−(k−1))/k!] p^k (1−p)^{N−k}

As is well known, p = Δt·λ. In this particular case Δt = T/N, hence Δt → 0 ⇒ N → ∞. Thus

= [N(N−1)···(N−(k−1))/k!] (λT/N)^k (1 − λT/N)^{N−k}

= (N/N)·((N−1)/N)···((N−(k−1))/N) · [(λT)^k/k!] · (1 − λT/N)^{N−k}.

Now evaluating the limit for N → ∞ we obtain P{A(T) = k} = e^{−λT}(λT)^k/k!.
Property 2
If A(T₁) and A(T₂) are the arrivals on two different and disjoint time intervals, then A(T₁) and A(T₂) are independent random variables.
Figure 17:
Figure 18:
In a Poisson process of rate λ, the interarrival times {X_i} are independent and negative exponential random variables of parameter λ.
Proof P3 — The interarrival times (expressed in slots) have a geometric distribution:

P{S = i} = (1 − p)^{i−1} p    i = 1, 2, ...

that is, i − 1 failures followed by one success. The interarrival time in seconds is the number of slots multiplied by the duration of a slot, X = S·Δt. With what is stated above, the cumulative distribution function of the interarrival times can be evaluated:

F_X(x) = P{X ≤ x} = 1 − P{X > x} = 1 − P{S > x/Δt}

x/Δt is treated as an integer (the integer number of consecutive slots without an arrival), thus

= 1 − (1 − p)^{x/Δt} = 1 − (1 − Δt·λ)^{x/Δt}

lim_{Δt→0} 1 − (1 − Δt·λ)^{x/Δt} = 1 − e^{−λx}.
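The limit above can be seen numerically. This small check is my own addition (λ and x chosen arbitrarily): as Δt shrinks, (1 − λΔt)^{x/Δt} approaches e^{−λx}.

```python
import math

lam, x = 1.5, 2.0
exact = 1 - math.exp(-lam * x)
for dt in (0.1, 0.01, 0.001):
    approx = 1 - (1 - lam * dt) ** (x / dt)
    print(dt, approx, exact)    # approx tends to exact as dt -> 0
```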
Figure 19:
Finally it is proved that F_X(x) = 1 − e^{−λx} is the cumulative distribution function of the interarrival times, i.e. X ~ exp(λ).

Property 4
The n-th arrival time t_n has an Erlang-n density:

f_{t_n}(t) = λe^{−λt} (λt)^{n−1}/(n−1)!    t ≥ 0
Proof P4 — From the figure it is evident that {t_n ≤ t} ≡ {A(t) ≥ n}. Thus

F_{t_n}(t) = P{t_n ≤ t} = P{A(t) ≥ n} = Σ_{j=n}^{∞} e^{−λt} (λt)^j/j!

because A(t) ~ Poisson(λt). Differentiating in t:
Figure 20:
Figure 21:
f_{t_n}(t) = −Σ_{j=n}^{∞} λe^{−λt} (λt)^j/j! + Σ_{j=n}^{∞} λe^{−λt} (λt)^{j−1}/(j−1)! = λe^{−λt} (λt)^{n−1}/(n−1)!

since the two sums telescope, leaving only the j = n term of the second one.
Property 5
The sum of two independent Poisson processes is Poisson too. The rate of the sum is the sum of the rates.

Proof P5 — The proof is valid for the sum of n processes. The summed process is modeled as follows:

A(t) = Σ_{i=1}^{n} A_i(t)

P{at least one arrival per slot} = P{arrival on A₁} + P{arrival on A₂} + ... + P{arrival on A_n}
Figure 22:
P{at least two arrivals per slot} = P{arrival on A₁ and A₂} + P{arrival on A₁ and A₃} + ...

Applying the lim_{Δt→0}, this last probability is negligible (it is of order Δt²). So there remain only

P{1 arrival} = Δt·λ,  P{0 arrivals} = 1 − Δt·λ,  with λ = Σ_{i=1}^{n} λ_i
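Property 5 can be illustrated with a quick simulation (my own sketch, not from the notes; rates and horizon are arbitrary): merging two independent streams of exponential interarrivals gives a stream whose rate is the sum of the rates.

```python
import random

rng = random.Random(7)

def poisson_arrivals(lam, T):
    """Arrival times of a Poisson process of rate lam on [0, T]."""
    t, out = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t > T:
            return out
        out.append(t)

l1, l2, T = 0.7, 1.3, 20_000.0
merged = sorted(poisson_arrivals(l1, T) + poisson_arrivals(l2, T))
print(len(merged) / T)          # close to l1 + l2 = 2.0
gaps = [b - a for a, b in zip(merged, merged[1:])]
print(sum(gaps) / len(gaps))    # close to 1 / (l1 + l2) = 0.5
```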
Consider now the splitting of a Poisson process of rate λ: each arrival is routed independently to channel 1, with probability p, or to channel 2, with probability 1 − p. The distributions of the resulting arrival processes are
Figure 23:
A₁(t) ~ Poisson(p·λ),  A₂(t) ~ Poisson((1−p)·λ)

P{A₁(t) = n, A₂(t) = m} = Σ_{i=0}^{∞} P{A₁(t) = n, A₂(t) = m | A(t) = i} · P{A(t) = i}

All the elements in the sum are equal to 0, except for i = n + m, thus

= P{A₁(t) = n, A₂(t) = m | A(t) = m + n} · e^{−λt} (λt)^{m+n}/(m+n)!

and trivially the first probability is binomial:

= C(m+n, n) p^n (1 − p)^m · e^{−λt} (λt)^{m+n}/(m+n)!
Expanding the binomial coefficient and splitting e^{−λt} = e^{−pλt}·e^{−(1−p)λt}, the joint probability factors into the product of two Poisson probability mass functions of rates pλ and (1−p)λ: the two split processes are independent Poisson processes.
Figure 24:
Define

Π_k = P{Q(t) = k}

as the probability mass function of the system occupancy Q (the number of clients in the system), and

Π_k^{AT} = P{Q(t) = k | A(t, t + δt) = 1}

as the distribution seen by an arriving client. For Poisson arrivals the two coincide (PASTA property, Poisson Arrivals See Time Averages):

Π_k ≡ Π_k^{AT}

Counterexample — When we deal with group arrivals, an arrival does not see the time averages: each client of a group sees at least the other members of its own group, so the system it observes differs from the time-average one.
Figure 25:
Consider a sequence of events as in the picture. The interarrival times {X_i} are independent random variables. The process A(t) is called the renewal counting process, and every event is called a renewal. The terminology is related to reliability theory, where the interarrival times represent the devices' lifetimes. The processes studied in network performance analysis are usually renewal processes.
Figure 26:
Renewal theorem: lim_{t→∞} A(t)/t = 1/X̄.

Proof — Consider an epoch t in between the n-th and the (n+1)-th renewal:

t_n ≤ t < t_{n+1}

t_n/n ≤ t/A(t) < t_{n+1}/n

(1/n) Σ_{i=1}^{n} X_i ≤ t/A(t) < (1/n) Σ_{i=1}^{n+1} X_i

Applying the limit, it is trivial to see that t → ∞ ⇒ n → ∞; thus, remembering the law of large numbers,

X̄ ≤ lim t/A(t) ≤ X̄  ⇒  A(t)/t → 1/X̄
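The renewal theorem is easy to observe in simulation. The sketch below is my own (lifetime distribution and horizon are arbitrary): counting renewals with uniform lifetimes of mean 1, the rate A(t)/t approaches 1/X̄.

```python
import random

rng = random.Random(11)
# renewal lifetimes X ~ Uniform(0.5, 1.5), so X_bar = 1
t, n, T = 0.0, 0, 10_000.0
while t < T:
    t += rng.uniform(0.5, 1.5)
    n += 1
print(n / T)   # close to 1 / X_bar = 1
```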
2.2.2 Example (single-server queue)
Figure 27:
Consider a single-server queue. The server works this way: it can have a job to work on (busy period) or it can be jobless, waiting for something to do (idle period). The life of a server is an alternating sequence of busy and idle periods, called cycles.
Figure 28:
Xi = Bi + Ii
Solution — The beginning of every cycle is a renewal. The length of the cycle is what we call the interarrival time (X_i):

ρ = lim_{t→∞} (1/t) ∫₀ᵗ I_B(τ)dτ = lim_{t→∞} (1/t) Σ_{i=1}^{A(t)} B_i = lim_{t→∞} [A(t)/t] · (1/A(t)) Σ_{i=1}^{A(t)} B_i = (1/X̄)·B̄

where we used the renewal theorem and the law of large numbers.
Thus, knowing that the cycle time is X̄ = B̄ + Ī, ρ can be written as

ρ = B̄/(B̄ + Ī)
Note — This proof and the proof of Little's law are very close to each other:

ρ = λS̄   (Little)
ρ = λ_B B̄   (activity fraction)

where λ_B = 1/(B̄ + Ī) is the rate of the busy periods. This justifies the interpretation given at the beginning of ρ as the utilization (activity) fraction of the server. Inverting,

B̄ = [ρ/(1−ρ)] · Ī
Figure 29:
Consider the renewal process A(t) with the IID sequence {X_i}, and fix an observation epoch t. Define

Y = X_{A(t)+1}   (lifetime of the device found alive at t)
E   (age of this device)
R   (residual lifetime of this device)

Thus Y = E + R.
E[Y] = ∫₀^∞ y · [y f_X(y)/E[X]] dy = ∫ y² f_X(y)dy / E[X] = E[X²]/E[X] = (E[X]² + VAR[X])/E[X] = E[X] + VAR[X]/E[X]

where f_Y(y) = y f_X(y)/E[X] is the density of Y (see below). Because VAR[X]/E[X] ≥ 0, the lifetime of an object found alive at instant t is greater than the lifetime of a generic object.
In other words it is more probable to sample an object with a long life.
E[R] = E[E] = E[Y]/2 = E[X²]/(2E[X]).
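This "inspection paradox" can be demonstrated with a simulation (my own illustrative sketch; the uniform lifetime distribution is an arbitrary choice). With X ~ Uniform(0, 2) we have E[X] = 1 and E[X²] = 4/3, so the theory predicts E[Y] = 4/3 and E[R] = 2/3.

```python
import bisect
import random

rng = random.Random(3)
# Renewal lifetimes X ~ Uniform(0, 2): E[X] = 1, E[X^2] = 4/3.
# Theory: E[Y] = E[X^2]/E[X] = 4/3 and E[R] = E[X^2]/(2 E[X]) = 2/3.
starts, t = [0.0], 0.0
while t < 100_000.0:
    t += rng.uniform(0.0, 2.0)
    starts.append(t)

ys, rs = [], []
for _ in range(50_000):
    u = rng.random() * starts[-1]             # uniform observation epoch
    i = bisect.bisect_right(starts, u) - 1    # renewal interval covering u
    ys.append(starts[i + 1] - starts[i])      # lifetime Y of that interval
    rs.append(starts[i + 1] - u)              # residual lifetime R
print(sum(ys) / len(ys))   # close to 4/3, larger than E[X] = 1
print(sum(rs) / len(rs))   # close to 2/3
```

The sampled lifetime is biased toward long intervals: a uniformly chosen epoch is more likely to fall inside a long renewal interval, exactly as the text says.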
Proof (partial) — f_Y(y) = y f_X(y)/E[X] is given, because it is derived from a difficult theorem (Blackwell's theorem). f_R and f_E can be proven knowing f_Y(y). The idea behind the proof is the following.
Figure 30:
f_{R|Y}(t|y) = f_{E|Y}(t|y) = 1/y for 0 < t < y, and 0 for t > y.
Figure 31:
f_R(t) = ∫₀^∞ f_{R|Y}(t|y) f_Y(y) dy = ∫_t^∞ (1/y) · [y f_X(y)/E[X]] dy = P{X > t}/E[X] = [1 − F_X(t)]/E[X]

and identically f_E(t) = [1 − F_X(t)]/E[X].
Note (Poisson arrivals) — In the case of Poisson arrivals the interarrival time is exponential. Thus

E[R] = E[X²]/(2E[X]) = 2E[X]²/(2E[X]) = E[X]

The residual time has the same expected value as the interarrival time: it is as if the process forgets about the past. Poisson is said to be a memoryless process and the exponential is a memoryless random variable.

Note (exponential random variable) — In the exponential case it is easy to prove the memoryless property directly: P{X > t + s | X > s} = P{X > t}. It is proven that the geometric random variable is the only memoryless discrete random variable.
Note (M/G/1) — The behavior of a queue with a single server has already been studied. Here are some extras about a single-server queue with:

• memoryless interarrivals;
• 1 server.

If the arrivals are Poisson, the time in between the last departure and the next arrival (called idle time) is exponential:

I ~ exp(λ) ⇒ Ī = 1/λ.
From the results already achieved,

B̄ = [ρ/(1−ρ)]·Ī = [ρ/(1−ρ)]·(1/λ) = S̄/(1−ρ)   (M/G/1)

where ρ = λS̄ is the server utilization fraction. Moreover, calling N_B the number of clients served in one busy period, the length of the busy period is

B = Σ_{j=1}^{N_B} S_j

thus, by the expectation theorem,

B̄ = N̄_B S̄  ⇒  N̄_B = 1/(1−ρ)   (M/G/1).
3 The single-server queue (M/G/1)
Figure 32:
Under certain hypotheses, in a single-server queue, it is possible to relate the average number of clients in the system Q̄ and the average delay D̄ to the arrival rate λ and the service time. Suppose Poisson arrivals of rate λ and IID service times with known S̄ and S̄². Then

W̄ = λS̄²/(2(1−ρ)),  ρ = λS̄   (Pollaczek–Khinchin formula).
From Pollaczek–Khinchin's and Little's laws the average occupancy Q̄ follows as well.
Proof Pollaczek-Khinchin is proven using a FIFO queue, though the formula holds for any policy.
Figure 33:
Look at a client arriving at the queue (test client). It arrives alone, because of Poisson arrivals. When it arrives, it finds N clients waiting in the queue and a residual service time R of the client in service. Define

W = R + Σ_{j=1}^{N} S_j  [seconds].
W̄ = R̄ + N̄ · S̄.
This result is valid for Poisson arrivals: the PASTA property assures that W̄ and R̄ seen by the test client coincide with the time averages. Applying Little's law to the waiting line,

N̄ = λ·W̄

thus

W̄ = R̄ + λS̄·W̄ = R̄ + ρ·W̄  ⇒  W̄ = R̄/(1−ρ).
It only remains to evaluate R̄. The process r(t) = residual service time can be drawn as follows:
Figure 34:
R̄ = lim_{t→∞} (1/t) ∫₀ᵗ r(τ)dτ ≃ lim_{t→∞} (1/t) Σ_{i=1}^{P(t)} S_i²/2
where P(t) is the number of departures in [0, t], S_i²/2 is the area of the i-th triangle, and the approximation is reasonable because of the limit. Hence

R̄ = lim_{t→∞} [P(t)/t] · (1/P(t)) Σ_{i=1}^{P(t)} S_i²/2 = λ·S̄²/2

as usual applying the definition of the rate λ and the law of large numbers.
Finally

W̄ = λS̄²/(2(1−ρ))

From Pollaczek–Khinchin's and Little's laws,

Q̄ = ρ[1 + ρ(1 + C_V²)/(2(1−ρ))]

where C_V is the coefficient of variation of the service time, i.e. S̄² = (S̄)²(1 + C_V²).
Figure 35:
From this we understand that to minimize the number of packets in the queue (i.e. the lowest queue occupation and waiting time) we should use fixed-size packets (C_V = 0). Indeed the service time is related to the size of the packets.
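The Pollaczek–Khinchin formula can be checked against an event-driven simulation. This sketch is my own (function name, rates and sample sizes are illustrative choices): it computes the average wait in an M/G/1 FIFO queue for deterministic and for exponential service, and compares it with λS̄²/(2(1−ρ)).

```python
import random

def mg1_wait(lam, service, n, seed=5):
    """Average wait in an M/G/1 FIFO queue; `service` draws one service time."""
    rng = random.Random(seed)
    t, server_free, waited = 0.0, 0.0, 0.0
    for _ in range(n):
        t += rng.expovariate(lam)       # Poisson arrival
        start = max(t, server_free)     # FIFO: wait for the server
        waited += start - t
        server_free = start + service(rng)
    return waited / n

lam, n = 0.5, 400_000
# deterministic service S = 1: S_bar = 1, second moment = 1, rho = 0.5
w_det = mg1_wait(lam, lambda r: 1.0, n)
# exponential service of mean 1: second moment = 2
w_exp = mg1_wait(lam, lambda r: r.expovariate(1.0), n)
pk = lambda s2: lam * s2 / (2 * (1 - 0.5))   # Pollaczek-Khinchin
print(w_det, pk(1.0))   # both near 0.5
print(w_exp, pk(2.0))   # both near 1.0
```

Note how, at equal ρ, the variable (exponential) service time doubles the wait: this is the C_V effect discussed above.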
3.3 M/G/1 with vacations
The M/G/1 with vacations is a non-work-conserving network. At the end of a busy period the server takes a vacation of random duration V₁. At the end of it, if the queue is still empty, it takes another vacation of random duration V₂; otherwise it starts to serve the clients in the buffer. And so forth.

W̄ = λS̄²/(2(1−ρ)) + V̄²/(2V̄)   (M/G/1 with vacations)
Proof — The proof flows as in the case with no vacations; the difference lies in the evaluation of the residual time. Remember that

W̄ = R̄/(1−ρ).
The residual time process in this case is the following
Figure 36:
Define

P(t) = number of departures in [0, t]
L(t) = number of vacations in [0, t]

lim_{t→∞} P(t)/t = λ = ρ/S̄

and similarly

lim_{t→∞} L(t)/t = (1−ρ)/V̄

(the server is on vacation for a fraction 1−ρ of the time, and each vacation lasts V̄ on average).
Now everything is set to evaluate the residual time. From the figure,

R̄ = lim_{t→∞} (1/t) ∫₀ᵗ r(τ)dτ ≃ lim_{t→∞} (1/t) [Σ_{i=1}^{P(t)} S_i²/2 + Σ_{i=1}^{L(t)} V_i²/2]
= lim_{t→∞} [P(t)/t]·(1/P(t)) Σ_{i=1}^{P(t)} S_i²/2 + lim_{t→∞} [L(t)/t]·(1/L(t)) Σ_{i=1}^{L(t)} V_i²/2 = λ·S̄²/2 + [(1−ρ)/V̄]·V̄²/2.
Hence, finally,

W̄ = λS̄²/(2(1−ρ)) + V̄²/(2V̄)
3.4 M/G/1 with setup time
At the end of a busy period the server shuts down. When a new client comes, the server takes a stochastic warm-up time T before starting to work; for instance, a copying machine is a system like this. The first client of a busy period sees a service time T + S, different from that of the other clients. As before,

W̄ = R̄/(1−ρ).
The residual time process, r(t), in this case is represented as in the picture below.
Figure 37:
Define

P(t) = number of departures in [0, t]
L(t) = number of busy periods in [0, t]
lim_{t→∞} P(t)/t = λ

and

lim_{t→∞} L(t)/t = λ_B

where λ_B = 1/(B̄ + Ī) is the busy periods' arrival rate, as stated earlier in the note on the single-server queue.
R̄ = lim_{t→∞} (1/t) ∫₀ᵗ r(τ)dτ = lim_{t→∞} (1/t) [Σ_{i=1}^{P(t)} S_i²/2 + Σ_{i=1}^{L(t)} (T_i S_i + T_i²/2)]

and as usual

= lim_{t→∞} [P(t)/t]·(1/P(t)) Σ_{i=1}^{P(t)} S_i²/2 + lim_{t→∞} [L(t)/t]·(1/L(t)) Σ_{i=1}^{L(t)} (T_i S_i + T_i²/2)

= λS̄²/2 + λ_B(S̄T̄ + T̄²/2)

where T̄ is the average warm-up time (T and S independent). The only thing left to discover is λ_B.
Evaluation of λ_B — As stated earlier in the note about the single-server queue,

λ_B = 1/(B̄ + Ī)

and

B̄/(B̄ + Ī) = P{Q(t) > 0} = 1 − π₀

where π₀ = P{Q(t) = 0}. Thus, again from the knowledge we already have on the M/G/1,

B̄ = [ρ/(1−ρ)]·Ī = [(1−π₀)/π₀]·Ī

thus

B̄ = (1/π₀ − 1)·Ī ⇒ B̄ + Ī = Ī/π₀ ⇒ 1/λ_B = Ī/π₀ ⇒ λ_B = π₀/Ī.

Remembering that because of Poisson arrivals Ī = 1/λ, hence

λ_B = π₀λ.
Evaluation of π₀ — The average effective service time Ŝ is (by the total probability theorem):

Ŝ = S̄(1 − π₀) + (S̄ + T̄)π₀ = S̄ + T̄π₀

That means that the average effective service time is S̄ if the arrival finds the server busy, or S̄ + T̄ if the arrival finds the server idle. Indeed, by the PASTA property, π₀ is the probability that an arrival finds the server idle. Imposing the utilization balance,

ρ̂ = 1 − π₀ = λ·Ŝ   (2)

1 − π₀ = λ(S̄ + T̄π₀)  ⇒  π₀ = (1 − λS̄)/(1 + λT̄) = (1 − ρ)/(1 + λT̄)
So, finally,

lim_{t→∞} L(t)/t = λ_B = λ(1−ρ)/(1 + λT̄)

Then it is trivial to obtain the waiting time, substituting R̄ into W̄ = R̄/(1−ρ):

W̄ = [λS̄²/2 + λ_B(S̄T̄ + T̄²/2)]/(1−ρ).
4 Shared medium networks: performance analysis
Figure 38:
Hp5 — the ideal controller controls and coordinates perfectly the accesses to the channel (server). The network can be modeled as a single-server queue with arrival rate Nλ, an infinite buffer and a deterministic service time S = P/R = D_p (P packet size, R channel rate).
Figure 39:
D̄_ideal = S̄ + W̄ = S̄ + (Nλ)·S̄²/(2(1−ρ))

which is simply Pollaczek–Khinchin applied to the queue with arrival rate Nλ.
In the case of the ideal controller the service time is deterministic, thus

D̄_ideal = D_p[1 + ρ/(2(1−ρ))]  [sec]   (ideal controller delay).

The delay normalized with respect to the packet time is

D̂_ideal = 1 + ρ/(2(1−ρ))  [pcks]   (ideal controller normalized delay).

Physically this means that the packet spends one packet-time at the server and ρ/(2(1−ρ)) packet-times waiting in the buffer.
The normalized throughput is

S = S_aggregate/R

i.e. the fraction of the channel rate carrying useful bits. With the ideal controller all bits are useful, hence S = ρ.

Note — The relation S = ρ is valid for any controller that avoids collisions (collisions generate no useful bits). Otherwise

S = ρ · P_success < ρ

where P_success is the probability that a packet is successfully transmitted (without collisions).
Note that every node has the right to its own time slot whether or not it has packets to send on the shared medium.⁴

⁴ In this case the medium is somehow divided in order to permit every client to access it. Examples are TDMA (Time Division Multiple Access) and FDMA (Frequency Division Multiple Access).
Figure 40:
Figure 41:
The network is similar to an M/D/1 with deterministic service. Thus the waiting time, as usual, is

W̄ = R̄/(1−ρ)

and, as usual, the residual time is what has to be evaluated. By periodicity (period T_F = M·D_p),

R̄ = lim_{t→∞} (1/t) ∫₀ᵗ r(τ)dτ = (1/T_F) ∫₀^{T_F} r(τ)dτ = (T_F²/2)/T_F = T_F/2 = M·D_p/2
Figure 42:
Figure 43:
W̄ = R̄/(1−ρ) = M·D_p/(2(1−ρ))

and the average delay time is

D̄_TDMA = D_p + W̄ = D_p + M·D_p/(2(1−ρ))  [sec]   (delay with TDMA)

D̂_TDMA = D̄_TDMA/D_p = 1 + M/(2(1−ρ))   (normalized delay with TDMA).
4.2.2 Throughput
The TDMA policy avoids collisions, hence S_TDMA = ρ holds, so D̂_TDMA = 1 + M/(2(1 − S_TDMA)). The performance seems almost ideal at high load (the saturation throughput is 1 for both the ideal controller and TDMA). On the other hand, the performance is very poor at low load, because of the rigid allocation policy adopted. In any case the delay curve lies above the ideal controller one.
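The two normalized-delay formulas are easy to tabulate. The sketch below is my own (M = 10 is an arbitrary example value): it evaluates D̂_ideal and D̂_TDMA over a range of loads, showing the large gap at low load.

```python
def d_ideal(rho):
    """Normalized delay of the ideal controller: 1 + rho / (2 (1 - rho))."""
    return 1 + rho / (2 * (1 - rho))

def d_tdma(rho, M):
    """Normalized delay of TDMA with M slots: 1 + M / (2 (1 - rho))."""
    return 1 + M / (2 * (1 - rho))

for rho in (0.1, 0.5, 0.9):
    print(rho, round(d_ideal(rho), 2), round(d_tdma(rho, M=10), 2))
```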
Figure 44:
Figure 45:
In the Aloha protocol, when a packet has to be transmitted it is just transmitted on the channel. If there is a collision, the packet is re-transmitted after a stochastic amount of time, called back-off. The total load offered to the channel therefore includes re-transmissions: Λ > λ.
Figure 46:
4.4.1 Throughput
The load at the channel is ρ = ΛS̄ = ΛD_p. The normalized throughput is S = ρ·P_succ, where P_succ = P{0 arrivals in the vulnerability period 2D_p} = e^{−2ρ}, as shown in the figure:
Figure 47:
S_ALOHA = ρ·e^{−2ρ}.
Note (Slotted Aloha) — A special case of the Aloha protocol is Slotted Aloha: packets are sent only in slots of duration D_p. In this case the throughput is better, because the vulnerability period is just D_p. It is trivial to find that

S_SLOTTED-ALOHA = ρ·e^{−ρ}.
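The two throughput curves and their maxima can be evaluated directly. This is my own sketch: S_ALOHA peaks at ρ = 1/2 with value 1/(2e) ≈ 0.18, and S_SLOTTED-ALOHA peaks at ρ = 1 with value 1/e ≈ 0.37.

```python
import math

def s_aloha(rho):      # pure Aloha: vulnerability period 2 Dp
    return rho * math.exp(-2 * rho)

def s_slotted(rho):    # slotted Aloha: vulnerability period Dp
    return rho * math.exp(-rho)

print(s_aloha(0.5), 1 / (2 * math.e))   # maximum 1/(2e) at rho = 1/2
print(s_slotted(1.0), 1 / math.e)       # maximum 1/e at rho = 1
```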
Figure 48:
D_packet = D_p (without collision);  D_p + B (with collision).

Finally

D̂_ALOHA = 1 + (e^{2ρ} − 1)(1 + B̄/D_p)   (normalized delay with Aloha).
It is possible to vary the back-off B dynamically in order to avoid the collapse of S beyond the maximum (S = 1/(2e) ≃ 0.18). To do that it is necessary to have a back-log estimation (the knowledge of the clients that will try to send again).

Policies that offer a rigid allocation give high delays at low load and low delays at high load; vice versa for the policies that offer random access.
Exercise (road intersection) — Cars arrive at rate λ: λ/3 turn long (LT), λ/3 go straight (S) and λ/3 do a short turn (ST). They are considered fully separated queues. The service time is different for cars that turn (D) and cars that go straight (D/2).
Figure 49:
Short turns are served twice with respect to the other directions. n is the number of cars that are able to turn while the light is green. The effective service times are:

S̄_LT = 4D   (n services every 4nD secs)
S̄_S  = 2D   (2n services every 4nD secs)
S̄_ST = 2D   (n services every 2nD secs)

Thus the bottleneck of the system is the long turns. The stability condition is given by

(λ/3)·4D < 1  ⇒  λ_MAX = 3/(4D) arrivals/sec.
Note that the assumption ≈ M/D/1 is valid only for high loads, otherwise the service time is over-estimated.

⇒ W̄_ST = R̄/(1 − (λ/3)·S̄_ST)

The only unknown here is R̄. The residual time r(t) is the following:
Figure 50:
r(t) is periodic. In the first two slots short turns are not served; in the last two slots the short turns are served.

R̄ = (1/T) ∫₀ᵀ r(t)dt = [(2nD)²/2 + 2n·D²/2]/(4nD) = [(2n+1)/4]·D
Hence
W̄_ST = (2n+1)D / [4(1 − (2/3)λD)]

⇒ D̄_ST = W̄_ST + D = D[1 + (2n+1)/(4(1 − (2/3)λD))]   (1)
4.6.3 Straight
Proceeding as before,
Figure 51:
hence

W̄_S = R̄/(1 − (λ/3)·2D).
In this case the residual time is the following: r(t) is periodic; in the first three slots straights are not served, in the last slot the 2n straights are served.

R̄ = [(3nD)²/2 + 2n·(D/2)²/2]/(4nD) = (9n + 1/2)·D/8

⇒ D̄_S = D[1/2 + (9n + 1/2)/(8(1 − (2/3)λD))]   (2)
hence

W̄_LT = R̄/(1 − (λ/3)·4D).

In this case the residual time is the following:
Figure 52:
Figure 53:
r(t) is periodic. In the first three slots long turns are not served; in the last slot the n long turns are served.

R̄ = [(3nD)²/2 + n·D²/2]/(4nD) = (9n + 1)·D/8

⇒ D̄_LT = D[1 + (9n + 1)/(8(1 − (4/3)λD))]   (3)
4.6.5 Conclusions

D̂_ST = D̄_ST/D ≃ 1 + 2n/(4(1 − (2/3)λD))
D̂_S  = D̄_S/D  ≃ 1/2 + 9n/(8(1 − (2/3)λD))      for n ≫ 1
D̂_LT = D̄_LT/D ≃ 1 + 9n/(8(1 − (4/3)λD))
Figure 54:
Figure 55:
4.7 Ethernet
Suppose that every node in an Ethernet network always has packets to transmit (high load). The channel behaves in time as follows.

For the sake of simplicity the contention intervals are a sequence of contention slots. A contention slot has a fixed size of one round-trip time, 2τ.⁵

Ethernet has a shared medium. The access mechanism is supposed to be the following: if a node has to transmit something on the channel, it first samples it. If the channel is free, the node tries to start the transmission with probability q, otherwise it does not. The trials are repeated at each contention slot (back-off algorithm). When a node is the only one trying to transmit in a contention slot, the communication starts and lasts until the node has finished transmitting its packet.
If there are N nodes, all contending for the channel, the probability that exactly one node transmits in a contention slot is A = Nq(1−q)^{N−1}, maximized by q = 1/N:

A_MAX = (1 − 1/N)^{N−1}.

Moreover A_MAX is lower bounded. Indeed lim_{N→∞} (1 − 1/N)^{N−1} = 1/e ≃ 0.36, and the limit is approached from above, thus A_MAX > 1/e.

⁵ τ is the propagation delay between the two farthest nodes in the network.
Figure 56:
Figure 57:
Length of a contention interval — The probability that a contention period is j slots long, with A = A_MAX, is

P{I = j} = (1 − A_MAX)^j A_MAX    j = 0, 1, 2, ...   (geometric)

Ī = (1 − A_MAX)/A_MAX ≲ e  [slots]   (since A_MAX > 1/e).
Length of a packet — Usually the packet size is variable. The average transmission time is

D̄_p = P̄/R  [secs].

The saturation throughput is then trivial to evaluate:
Figure 58:
Figure 59:
S_MAX = D̄_p/(D̄_p + Ī·2τ) ≳ D̄_p/(D̄_p + 2eτ)   (3)

Note — If 2τ ≫ D̄_p then S_MAX → 0. Thus Ethernet works well only if the packet duration is longer than the propagation delay.
Figure 60:
The Ethernet standard (IEEE 802.3) requires that the distance between two nodes be at most 2.5 km and the minimum frame size be 64 bytes (which corresponds to the slot time 2τ). Transmitting with minimum-size frames the throughput is

S_MAX ≳ (64/R) / (64/R + e·64/R) = 1/(1+e) ≃ 0.27   (by (3))
Transmitting with frames of 1024 bytes (a usual dimension) the throughput is

S_MAX ≳ (1024/R) / (1024/R + e·64/R) = 1024/(1024 + 64e) ≃ 0.85   (by (3))

That is the typical Ethernet saturation throughput: every node can transmit at 10 Mbps, but the useful bits flow at roughly 8.5 Mbps.
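The two saturation-throughput figures can be reproduced with a one-line function. This sketch is my own (the byte-time normalization, where the 2τ slot equals 64 byte-times, follows the computation above).

```python
import math

def ethernet_smax(frame_bytes, slot_bytes=64):
    """Saturation throughput bound S >= Dp / (Dp + e * 2tau).
    Times are in byte-times: the 2tau slot corresponds to 64 bytes."""
    return frame_bytes / (frame_bytes + math.e * slot_bytes)

print(round(ethernet_smax(64), 2))     # minimum-size frames: 1/(1+e)
print(round(ethernet_smax(1024), 2))   # typical frames
```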
4.8 Token-ring
Figure 61:
The network is a ring with a direction. In the figure every node has the right to use the network after its left neighbor and before its right neighbor. The cyclic mechanism is realized using a token.

Calling l the distance between two nodes, L the total length of the ring, v the propagation speed and D̄_p the average packet transmission time, the average cycle duration is

C̄ = D̄_p + L/v (token recircle) + l/v (token passage)

and the maximum throughput is

T_M = D̄_p / (D̄_p + (L+l)/v).
Figure 62:
Figure 63:
Service times {S_i} are the same for every node and are independent of the arrivals; S̄ and S̄² are known. Each node transmits the first packet of its queue when polled (this is called limited service). There is a stochastic reservation interval between two consecutive interrogations; the reservation intervals {V_i} are IID with known V̄ and V̄². Define

ρ = NλS̄
which is the utilization fraction of the server (fraction of time in which the server is busy).

The server can be seen as a single queue. The average waiting time W̄ of a packet can be evaluated with a test packet arriving at a generic queue (say the first one, because they are all equal). Its wait is made of:

• the residual time of the client served when the test packet arrives (R̄);
• the service times of the N_F packets arrived before the test packet;
• the reservation intervals elapsed before its service (Y).
Figure 64:
W̄ = R̄ + N̄_F S̄ + Ȳ   (1)

where Y = Σ_{i=1}^{L} V_i and L is the number of reservation intervals seen by the test packet before its service.
Evaluation of N̄_F — Little's law applied to the system gives the average number of clients in the system:

N̄_F = Nλ·W̄ = (ρ/S̄)·W̄.

Thus, by equation (1),

W̄(1 − ρ) = R̄ + Ȳ   (3)

where ρ = λNS̄.
Figure 65:
Figure 66:
Figure 67:
Evaluation of Ȳ —

Y = Σ_{i=1}^{L} V_i  →  Ȳ = L̄·V̄   (stochastic sum)

The test packet arrives with uniform probability on the timeline. With probability ρ/N it arrives during the transmission of a given node; with probability (1−ρ)/N it arrives during the reservation of a given node. Thus with probability ρ/N + (1−ρ)/N = 1/N it arrives during the slot of a given node.

Let N_F1 be the number of clients before the test in its own queue. Because the queues are all the same, N̄_F1 = N̄_F/N.
If the test client arrives with its queue empty (N_F1 = 0), the number L of reservation intervals it sees is:

if it arrives during the N-th slot (prob. 1/N): L = 1
if it arrives during the (N−1)-th slot (prob. 1/N): L = 2
... ... ...
if it arrives during the 1st slot, transmission time (prob. ρ/N): L = N
if it arrives during the 1st slot, reservation time (prob. (1−ρ)/N): L = 0

Therefore, by the total probability theorem,

E[L | N_F1 = 0] = (1/N)(1 + 2 + ... + (N−1)) + N·(ρ/N) + 0·((1−ρ)/N) = (N−1)/2 + ρ
Figure 68:
Figure 69:
If the test client arrives when its queue is not empty, it sees N_F1 extra cycles, hence N_F1·N extra reservation intervals:

E[L | N_F1 > 0] = (N−1)/2 + ρ + E[N_F1·N | N_F1 > 0]

Finally, by the total probability theorem,

L̄ = E[L] = [(N−1)/2 + ρ]·[P{N_F1 > 0} + P{N_F1 = 0}] + N·E[N_F1 | N_F1 > 0]·P{N_F1 > 0}

L̄ = (N−1)/2 + ρ + N·N̄_F1 = (N−1)/2 + ρ + N̄_F   (4)
Collecting all the results (1)–(4):

W̄(1 − ρ) = S̄ρ(1 + C_VS²)/2 + (1−ρ)·V̄(1 + C_VV²)/2  [= R̄]  +  V̄·[(N−1)/2 + ρ + N̄_F]  [= Ȳ]

and solving for W̄ (using N̄_F = ρW̄/S̄):

W̄ = [S̄ρ(1 + C_VS²) + V̄(N + ρ + (1−ρ)C_VV²)] / {2[1 − ρ(1 + V̄/S̄)]}
Consider the throughput S = ρ (the fraction of time in which packets are sent) and the delay of the polling system D̄_polling = W̄ + S̄. Assuming deterministic service and reservation times (S = D_p, C_VS = C_VV = 0), the normalized delay becomes

D̂_polling = 1 + [ρ + (1/S_MAX − 1)(N + ρ)] / [2(1 − ρ/S_MAX)]

with S_MAX = D_p/(D_p + V). Thus, for ρ → 0,

D̂_polling → 1 + (1/S_MAX − 1)·N/2 = 1 + (V/D_p)·N/2.
Figure 70:
Note that if V ≪ D_p the polling system is really close to the ideal controller: the delay at low load is similar to the ideal one.
4.9.2 Exercise (proofs on polling system)
Prove that in the polling system:

1. The average cycle duration is C̄ = NV̄/(1 − ρ).
2. The average number of arrivals in a cycle is N̄_C = [ρ/(1−ρ)]·N·(V̄/S̄).
3. The probability that the test packet finds its queue not empty is P{N_F1 > 0} = [ρ/(1−ρ)]·(V̄/S̄).
Figure 71:
Consider a network with N nodes. Every node j has an input flow with rate λ_j^i [pck/sec] (the superscript i stands for input).

Define C_kj [bit/sec], the average transmission rate on the k→j channel; λ_kj [pck/sec], the average arrival rate on the k→j link; and Q̄_kj [pck], the average number of packets in the k→j queue. Moreover define Λ ≜ Σ_{j=1}^{N} λ_j^i, the total input rate of the whole network.
Applying Little's law to the whole network:

D̄ = Q̄/Λ = (1/Λ) Σ_{(k,j)} Q̄_kj   (1)

Suppose that the flows are split among the network's internal queues in order to guarantee stability for all of them.
Figure 72:
ρ_kj = λ_kj S̄_kj = λ_kj·(L̄/C_kj) = F_kj/C_kj

where F_kj = λ_kj·L̄ [bit/sec] is called the flow at the input of the queue k→j. Recalling the M/M/1 result for the average number of clients, Q̄ = ρ/(1−ρ), thus

Q̄_kj = F_kj/(C_kj − F_kj)   (2)
Hence, substituting in (1),

D̄ = (1/Λ) Σ_{(k,j)} F_kj/(C_kj − F_kj)   (3)

This is the basic formula for rough routing planning of packet networks, so as to obtain stable networks. It is a typical constrained minimization problem of a function (D̄) with constrained unknowns (F_kj); problems like this are studied in operations research.
• C12 = C23 = 2 [kb/s]
• C13 = 1 [kb/s]
• L̄ = 1 [kb/pck]
• λ_12^i = λ_13^i = λ_23^i = 1 [pck/s]
Figure 73:
From the figure it is evident that λ_2^i = λ_23^i and λ_1^i = λ_12^i + λ_13^i.

1. First, try to route most of the traffic through the shortest paths. All the traffic of the flow λ_12^i can be sent through 1→2: F12 = λ_12^i·L̄ = 1 ≤ C12 [kb/s]. Likewise λ_23^i goes through 2→3.
Differently, λ_13^i cannot be entirely routed on 1→3, because then F13 = 1 = C13 [kb/s] and the queue would not be stable (ρ = 1). The solution is to route a part of the flow, β, through 1→3 and the other part, 1−β, through 1→2→3:

βλ_13^i → 1→3
(1−β)λ_13^i → 1→2→3
D̄ = (1/Λ)·[F12/(C12 − F12) + F13/(C13 − F13) + F23/(C23 − F23)]   (from (3))
with
F12 = [λ12^i + (1−β)·λ13^i]·L̄ = 1 + (1−β)
F13 = β·λ13^i·L̄ = β
F23 = [λ23^i + (1−β)·λ13^i]·L̄ = 1 + (1−β)
Λ = 1 + 1 + 1 = 3
hence
D̄ = (1/3)·[2·(1 + (1−β))/(1 − (1−β)) + β/(1−β)]
The optimal routing is the one that minimizes D̄. Then
D̄ minimum ⇒ ∂D̄/∂β = 0 ⇒ β ≈ 0.66
(after some math).
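The "some math" above can also be checked numerically. The sketch below (a plain grid search; the function names and step count are our illustrative choices, not from the notes) confirms the optimum β = 2/3 ≈ 0.66, where D̄ = 2 sec:

```python
# Grid search for the beta minimizing the average delay of the example above:
# D(beta) = (1/3) * [2*(2 - beta)/beta + beta/(1 - beta)].

def avg_delay(beta):
    return (2 * (2 - beta) / beta + beta / (1 - beta)) / 3

def argmin_delay(steps=100000):
    # Scan the open interval (0, 1) and keep the best point.
    best_b, best_d = None, float("inf")
    for k in range(1, steps):
        b = k / steps
        d = avg_delay(b)
        if d < best_d:
            best_b, best_d = b, d
    return best_b
```

Setting the analytic derivative to zero gives (1−β)² = β²/4, i.e. β = 2/3, matching the grid search.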
Figure 74:
The traffic in the network is uniform: every node transmits with equal probability to every other node. The input rate, λ^i, is the same for every node. The input and the output rates are equal, λ^i = λ^o. This traffic is said to be uniform.
Every node is connected to p other nodes via unidirectional links. The transmission capacities are the same for every link. In every network of this kind there are p·N unidirectional links.
Every node in this kind of network sees the same topological situation.
5.2.2 Shufflenet
Shufflenet is a multi-ring network. It is not planar and it can be built on a cylinder.
Figure 75:
5.2.3 Throughput
All the queues are equal. Define q̄ the average number of packets in a queue, d̄ the average delay of the queue, L̄ the average size of the packets and D̄p the average transmission time of a packet (L̄/C).
The utilization fraction of the link is
ρ = λ·D̄p = λ·L̄/C
Little's law applied to the single queue is
q̄ = λ·d̄ = (ρC/L̄)·d̄   since λ = ρC/L̄
Q̄ = Np·q̄
D̄ = H·d̄
where H is the average number of hops from the source to the destination. Thus Little's law applied to the whole network, D̄ = Q̄/Λ  (1), reads
H·d̄ = (1/Λ)·Np·(ρC/L̄)·d̄
Figure 76:
Figure 77:
Note that ΛL̄ is the aggregate throughput of the whole network. Thus, the normalized throughput S is
S = ΛL̄/C = Λ·D̄p = (Np/H)·ρ   (2)
using (1). It is evident that the average number of hops H has to be minimized to maximize the throughput S; this routing policy is called shortest-hop.
Figure 78:
SMAX = N·p/H
To improve the capacity of the network (how many useful bit/s can the network afford? SMAX·C):
• Increase C: with faster transmitters we improve the number of useful bits per second the network can carry.
• Reduce H: choose a routing algorithm able to reduce the average number of hops.
D̄ = H·d̄ = H·D̄p/(1 − ρ)
where H·d̄ is the time required to perform the H hops. Thus, the normalized delay is
D̂ = D̄/D̄p = H/(1 − ρ)
and
SMAX/N = p/H.
The question is: how does the topology influence the throughput? Indeed Manhattan, Shufflenet and Bidirectional Ring networks are all 2-connected networks. If all the links cost the same, a question could be: which topology minimizes the average number of hops H? There is a lower bound:
Hmin = Σ_{i=1}^{dMAX} i·n(i)/(N − 1)
Figure 79:
where dMAX is the diameter of the network, n(i) is the number of nodes i hops away from S, and 1/(N−1) is the probability of transmitting to a node i hops away with uniform traffic.
The Moore network is the theoretical network where every node sees a p-ary tree topology. This network is the most compact p-connected network, thus the one that minimizes H.
In a Moore network the following holds:
Σ_{k=0}^{dmax−1} p^k ≤ N ≤ Σ_{k=0}^{dmax} p^k
(p^{dmax} − 1)/(p − 1) ≤ N ≤ (p^{dmax+1} − 1)/(p − 1)
From this
H(Moore) = (1/(N−1))·Σ_{k=0}^{dmax} k·n(k) = (1/(N−1))·[Σ_{k=0}^{dmax−1} k·p^k + dmax·(N − (p^{dmax} − 1)/(p − 1))]
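The Moore-bound formula above can be evaluated with a short helper. This is a sketch under the notes' assumptions (p ≥ 2, n(k) = p^k on the full levels, remaining nodes at distance dmax); the function name is ours:

```python
def moore_avg_hops(N, p):
    # d_max: depth at which a full p-ary Moore tree first covers N nodes.
    d_max, covered = 0, 1
    while covered < N:
        d_max += 1
        covered += p ** d_max
    # Full levels contribute k * p^k nodes at distance k; the last level holds
    # the remaining N - (p^d_max - 1)/(p - 1) nodes at distance d_max.
    inner = sum(k * p ** k for k in range(1, d_max))
    outer = d_max * (N - (p ** d_max - 1) // (p - 1))
    return (inner + outer) / (N - 1)
```

For instance, a full binary Moore network with N = 7 has 2 nodes at 1 hop and 4 at 2 hops, giving H = 10/6 ≈ 1.67.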
Figure 80:
In this example the p = 1 and p = 2 lanes of the roundabout are studied.
Figure 81:
5.4.2 Throughput S
S = (Np/H)·ρ   (A)
where ρ is the utilization fraction of the internal links.
Furthermore
S = Λ·L̄/C = Λ·Dp   (B)
that is, the average number of arrivals in a client transmission time. Equating (A) and (B) with Λ = N·λ^i:
(Np/H)·ρ = N·λ^i·Dp ⇒ ρ = (H/p)·λ^i·Dp   (1)
Suppose that the roundabout is a slotted ring. If this is true then ρ = P{full slot}. The flow balance at the node is shown in the following figure.
The node throughput is given by Sn = S/N. It coincides with the average number of clients injected/absorbed per node in Dp. Further
Sn = Sinj = Sabs = λ^i·Dp = (p/H)·ρ   (2)
Now we investigate the injection and absorption throughputs.
Sabs = ρ·r·p   (3)
where ρ is the probability that a slot is full and r is the probability that this slot is at its destination. Sabs is the average number of clients absorbed (exiting from the roundabout) per slot per node, thus the probability of absorption.
Figure 82:
r = 1/H   (4)
which physically means that there is an absorption every H hops on average.
Injection Sinj For the injection, Sinj is the average number of clients injected per slot per node, thus the probability of injection. Injection is possible always, except when the slot is full and not absorbed; with p lanes,
Ps = 1 − [ρ·(1 − r)]^p = 1 − [ρ·(1 − 1/H)]^p   (5)
The injection of a client takes
S = Σ_{j=1}^{Ns} Dp = Ns·Dp   (6)
Figure 83:
where Ns is the number of slots needed to inject a car in the roundabout. Ns is geometrically distributed, thus N̄s = 1/Ps and N̄s² = (2 − Ps)/Ps², then
S̄ = Dp/Ps   and   S̄² = Dp²·(2 − Ps)/Ps²   (7)
Thus by Little's law: P{empty queue} = 1 − λ^i·S̄ = 1 − λ^i·Dp/Ps ⇒ P{at least 1 client in the queue} = g = λ^i·Dp/Ps. Thus the expression of g is found.
W̄ = λ^i·S̄² / (2·(1 − λ^i·S̄)) = [λ^i·Dp²·(2 − Ps)/Ps²] / [2·(Ps − λ^i·Dp)/Ps]   (using (7))
Define X ≜ λ^i·Dp.
From (1) ⇒ ρ = (H/p)·X.
From (5) ⇒ Ps = 1 − [(H/p)·X·(1 − 1/H)]^p = 1 − (((H−1)/p)·X)^p.
Thus
W̄ = Dp · X·(2 − Ps) / (2·Ps·(Ps − X))
The first denominator factor that goes to zero (thus the one that causes instability) is Ps − X = 1 − X − (((H−1)/p)·X)^p, then
Figure 84:
p = 1 → 1 − 2X = 0 ⇒ X = 1/2 = 0.5
p = 2 → 1 − X − (X/2)² = 0 ⇒ X = 2(√2 − 1) ≈ 0.83
Hence, the saturation throughput:
p = 1 ⇒ λ^i_MAX = 0.5/DP
p = 2 ⇒ λ^i_MAX ≈ 0.83/DP
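The two saturation points can be recovered numerically by solving Ps − X = 0. The sketch below assumes H = 2, as implied by the 1 − 2X and 1 − X − (X/2)² equations above; the bisection helper and its name are illustrative:

```python
def saturation_x(p, H=2.0):
    # Bisection on f(X) = 1 - X - (((H - 1)/p) * X)**p, decreasing on [0, 1].
    f = lambda x: 1.0 - x - (((H - 1.0) / p) * x) ** p
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid          # root is to the right
        else:
            hi = mid          # root is to the left
    return (lo + hi) / 2
```

With p = 1 this returns 0.5 exactly; with p = 2 it returns 2(√2 − 1) ≈ 0.8284, the value rounded to 0.82/0.85 in the notes.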
D̄ = W̄ + (H·Dp − Dp) [propagation] + S̄ [injection]
hence, normalizing,
D̂ = D̄/Dp = W̄/Dp + (H − 1) + S̄/Dp
For X → 0 ⇒ D̂ = H , thus if the load tends to go to zero, then the delay time is just the time to do the hops.
Figure 85:
Note that with uniform traffic all the streets are treated equally at the roundabout, while the traffic light is
The DTMC is said to be finite if the number of states is finite, |S| < ∞.
Figure 86:
(1) is called the Markov memoryless property. In common words, in a Markov process the future behavior, given the past and the present, depends only on the present. Thus the Markov process is independent from the past, given the present.⁶
Figure 87:
Definition (time-homogeneous DTMC) Suppose that {Xn} is a DTMC. If ∀i, j ∈ S the 1-step transition probabilities pij = P{Xn = i|Xn−1 = j} do not depend on n, the chain is time-homogeneous and is described by the transition matrix
P = [p00 p01 …; p10 p11 …; … … …]
with
pij ≥ 0,  Σ_{i∈S} pij = 1   (2)
Every element of the matrix is a probability and the columns sum to one. Because of those properties the matrix
⁶ In probability, a random variable A is said to be independent from B if and only if P(A|B) = P(AB)/P(B) = P(A).
The process can be written as Sn = Σ_{i=1}^{n} Xi = Sn−1 + Xn; thus if sn−1 is known, the PMF of Sn doesn't depend on the older past: it's a DTMC.
Instance (increments process) The process Yn = Xn − Xn−1 with {Xi} i.i.d. is not a DTMC.
Indeed, the PMF of Yn given the whole history depends on Xn−1, i.e. on the present and on the past. The Markov property is not satisfied.
P{X0 = i0, X1 = i1, …, Xn = in} = P{X0 = i0}·p_{i1,i0}·p_{i2,i1}·… = (Π_{j=1}^{n} p_{ij,ij−1})·P{X0 = i0}
i.e. the product of the start probability and the 1-step probabilities.
p(n) = [p1(n), p2(n), …]ᵀ   (state vector)
p(n) is loosely called the distribution at time n. This is the first-order statistic of the stochastic process {Xn}.
The distribution at time n can be found as follows:
pi(n) = Σ_{j∈S} P{Xn = i|Xn−1 = j}·pj(n−1)   ∀i ∈ S   (3)
(by total probability). In matrix form
p(n) = P·p(n−1)
Hence, if p(0) is the initial distribution of the DTMC, recursively it is trivial to obtain
p(n) = Pⁿ·p(0)
The DTMC behaves like a linear time-invariant system with system matrix P. The time evolution of the PMF, p(n), is the response of the system to the initial state p(0). The time evolution is related to the eigenvalues {λi} of the matrix P.
Figure 88:
Solution Represent the process with two states: 0 (idle) and 1 (busy).
P{Xn = 0|Xn−1 = 0} = 1 − α
P{Xn = 1|Xn−1 = 1} = 1 − β
thus the future is independent from the past given the present: it's a DTMC.
Figure 89:
P = [1−α  β; α  1−β]   (columns indexed by the "from" state, rows by the "to" state)
The probabilities not specified in the text of the example are found knowing that the columns are PMFs.
Let's see how the transition matrix evolves (reference values α = 0.1 and β = 0.2):⁷
" # " #
2 0.9 0.2 0.83 0.34
P = =
0.1 0.8 0.17 0.66
" #
2 2 0.74 0.50
P4 = P
=
0.25 0.49
" #
8 0.68 0.62
P =
0.31 0.37
" #
16 0.66 0.66
P =
0.33 0.33
" #
2/3 2/3
lim Pn = (nth step matrix)
n→∞ 1/3 1/3
The columns of the matrix become identical. For every possible starting distribution, then:⁸
The limiting distribution (i.e. the probability of finding the system busy or idle) is the same for all the possible
⁷ Generally it is proven that Pⁿ = (1/(α+β))·[β β; α α] + ((1 − (α+β))ⁿ/(α+β))·[α −β; −α β],  n ≥ 1.
⁸ Generally it is proven: w = (1/(α+β))·[β; α] = lim_{n→∞} [P{Xn = 0}; P{Xn = 1}].
starting distributions; it is the distribution towards which the system tends.
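This convergence can be reproduced numerically with the reference values α = 0.1 and β = 0.2; the tiny matrix helpers below are our illustrative sketch:

```python
def mat_mul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def chain_power(alpha, beta, n):
    # Column-stochastic P, as in the notes: columns are "from", rows are "to".
    P = [[1 - alpha, beta], [alpha, 1 - beta]]
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = mat_mul(R, P)
    return R
```

Running `chain_power(0.1, 0.2, n)` for growing n reproduces the sequence P², P⁴, … above, with both columns tending to [2/3, 1/3].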
F ≜ B̄/(B̄ + Ī)   (Busy fraction)
Figure 90:
B and I are the sojourn times in the busy/idle state, thus their distributions are geometric distributions
P {B = n} = (1 − β)n β
P {I = n} = (1 − α)n α
F = B̄/(B̄ + Ī) = (1/β)/((1/α) + (1/β)) = α/(α + β) ⇒ F ≡ P{Xn = 1}
Thus our interpretation of F as the probability of finding the system busy at time n was correct. ✓
6.3 The limiting distribution
The limiting distribution is really important in the study of a DTMC. In this section we discover for which DTMCs it exists and is unique.
Fact There are as many linearly independent stationary distributions (z = P·z) as the multiplicity of the eigenvalue λ = 1.
Fact There are some DTMCs for which the limiting distribution doesn't exist (∄w). Those DTMCs are called periodic.
There are some DTMC for which the limiting distribution exists but is dependent from the initial distribution
p(0).
Last, there are some DTMC where the limiting distribution exists and is unique. In those DTMC the matrix
j → i ⇒ Pij^(n) = P{Xn = i|X0 = j} > 0 for some n
Figure 91:
Definition (communication) The states i and j communicate if they are accessible one from the other. Algebraically
i ↔ j ⇒ ∃n1, n2 :  Pij^(n1) = P{X_{n1} = i|X0 = j} > 0  and  Pji^(n2) = P{X_{n2} = j|X0 = i} > 0
For communication, reflexivity (i ↔ i), symmetry and transitivity (if i ↔ j and j ↔ k then i ↔ k) hold.
Denition (class) Two states are belonging to the same class if they communicate.
Two different classes are disjoint, because if they had a common state they would communicate.
Denition (irreducible DTMC) A DTMC is said to be irreducible if it has only one class.
It can be proven that for an irreducible DTMC the eigenvalue λ = 1 has multiplicity 1. Hence the stationary distribution exists and is unique.
Figure 92:
Some examples This is a DTMC with 3 classes. The state 3 is said to be absorbing.
Figure 93:
Figure 94:
This is an example of a DTMC with an infinite number of states and an infinite number of classes. This process is called:
This is an example of an infinite-state DTMC with only one class. This process is periodic and is
Figure 95:
p(0) = [0, …, 0, 1, 0, …]ᵀ   (the 1 in the j-th position)
Figure 96:
Each return is a renewal. For the memoryless property of the DTMC at each time the chain goes back to j the
evolution starts again with the same probabilities of the first time. Hence {Tj (i)} are independent and identically
distributed.
Ij(n) = 1 if Xn = j, 0 else
Thus the total number of returns to j up to time n is R(n) = Σ_{k=1}^{n} Ij(k).
The expected number of returns to j at time n will then be
Figure 97:
E[Rj(n)|X0 = j] = Σ_{k=1}^{n} E[Ij(k)|X0 = j] = Σ_{k=1}^{n} P{Xk = j|X0 = j} = Σ_{k=1}^{n} Pjj^(k)
where Pjj^(k) is the probability that the process goes from j to j in k steps.
E[Rj(∞)|X0 = j] = Σ_{k=1}^{∞} Pjj^(k) = ∞
Figure 98:
Fact Recurrence is a class property. Indeed, if j ↔ i and j is recurrent, then i is recurrent too; the same holds for transience.
Hence if a DTMC is irreducible (only one class) all the states will be transitory or all recurrent.
Moreover, if the DTMC is irreducible with a finite number of states, the states will be recurrent (because there exists
Definition (periodicity) The state j is said to be periodic with period d if it can be revisited only at times multiple of d. Formally, Pjj^(k) = 0 if k is not a multiple of d.
Figure 99:
Fact All the states belonging to the same class have the same period.
In example 2 the states {0, 1} can be visited at times 2,4,6,.. and the states {2, 3} can be visited at times 4,6,8,..
. Hence the chain is periodic with period d = 2.
In example 4 the number of increments (+1) is equal to the number of decrements (-1). Hence the chain is
Figure 100:
As stated so far, the recurrence times Tj(i) are independent and identically distributed with average E[Tj], which can be finite or infinite.
In this setting, the returns to j are the renewals and Rj(n) is the renewal counting process. Hence, by the renewal theorem, the
Fj(n) = Rj(n)/n → (n→∞) πj = 0 if j is transitory; 1/E[Tj] if j is recurrent
In example 4
• if p > (1 − p) the chain drifts towards +∞, hence all states are transitory.
• if p < (1 − p) the chain drifts towards −∞, hence all states are transitory.
Denition (ergodic state) A state is said to be ergodic if it is positive recurrent and aperiodic.
Figure 101:
Denition (ergodic chain) A DTMC is said to be ergodic if it is irreducible, positive recurrent and aperiodic.
Note that a nite and irreducible DTMC is ergodic if it's aperiodic. Then the chain in the example 2 is not
ergodic.
Starting from the state j (X0 = j ) we observe the visits to a generic state i.
As always, define the return indicator
Iij(n) = 1 if Xn = i given X0 = j, 0 else
and the number of returns Rij(n) = Σ_{k=1}^{n} Iij(k).
Figure 102:
Besides the first interval, all the others are independent and identically distributed.
lim_{n→∞} Rij(n)/n = 1/E[Ti] ≜ πi > 0   (w.p. 1)
that is the long-term fraction of time that the DTMC spends in i, for every arbitrary starting state j.
It is easy to see that Σ_{i∈S} πi = 1, i.e. π is a PMF.
proof Indeed, starting from X0 = j, at every step the chain visits some state i ∈ S, thus in the interval [1, n]: Σ_{i∈S} Rij(n) = n. Thus Σ_{i∈S} Rij(n)/n = 1, and taking the limit n → ∞ the claim follows.
Fact For an irreducible and positive recurrent DTMC it is proven that π is the only stationary distribution, i.e. π = Pπ, with πi = 1/E[Ti], i ∈ S.
Fact For an ergodic DTMC (i.e. irreducible, positive recurrent and with period 1) it is proven that π is the only limiting distribution.
proof The times {Ti} are random variables with integer values (because d = 1 they can assume every integer value). So,
E[Rij(n)] = Σ_{k=1}^{n} E[Iij(k)] = Σ_{k=1}^{n} Pij^(k)
where Pij^(k) = P{Xk = i|X0 = j}.
Moreover, by Blackwell's theorem, there exists
lim_{n→∞} (E[Rij(n)] − E[Rij(n−1)]) = 1/E[Ti] ≜ πi   ∀j
but
E[Rij(n)] − E[Rij(n−1)] = Σ_{k=1}^{n} Pij^(k) − Σ_{k=1}^{n−1} Pij^(k) = Pij^(n)
hence the element (i, j) of the matrix Pⁿ.
Thus
lim_{n→∞} Pⁿ = [π1 π1 … π1; π2 π2 … π2; π3 π3 … π3; … … … …] = [π, π, …, π]
and, finally, this implies that
w = lim_{n→∞} p(n) = lim_{n→∞} Pⁿ·p(0) = [π, π, …, π]·p(0) = π
transitory states and C1, C2, …, Ck is a countable family of disjoint and irreducible subchains of recurrent states.
If a chain starts in T, after a finite amount of time it evolves into one of the recurrent classes Ci with a certain probability pi. Once the chain enters a recurrent class it will stay there forever. A brief interpretation in
Note that the transitory states can be divided in disjoint classes, but all of them are transitory. An instance is
Figure 103:
P = [P1 0 0 … R1; 0 P2 0 … R2; 0 0 … 0 …; … … 0 Pk Rk; 0 0 … 0 Q]
i.e. a block upper-triangular matrix, where all the zeros are due to the fact that recurrent classes don't communicate with one another. Moreover it is impossible, by definition of class, that from a recurrent class the chain goes to a transitory state.
The Pi are stochastic matrices. Ri governs the transitions from the transitory states to the recurrent class Ci. Q governs the transitions among transitory states.
Pⁿ = [P1ⁿ 0 … R1^(n); 0 P2ⁿ … …; … … … …; 0 0 … Qⁿ]
Figure 104:
where Ri^(n) = Σ_{j=0}^{n−1} Pi^{n−1−j}·Ri·Q^j.
If the chain starts from a recurrent class, say C1, we have
p(n) = Pⁿ·[p̂(0); 0; …; 0] = [P1ⁿ·p̂(0); 0; …; 0]
This means that a chain that starts in C1 stays in C1 (generally, if it starts in Ci it stays in Ci).
Moreover if n→∞ then the probability of being in a transitory state will be null (i.e. Qn → 0).
Note From these considerations it can be seen that if the chain has more recurrent classes it can't be ergodic.
Indeed, if p(0) is dened only in Ci , then the statistical properties of the chain will be those of the subchain Ci .
If we measure the statistics (time averages) of the whole chain they will converge to the statistics of the subchain
Ci . This because there are more stationary distributions, one for every recurrent subchain, excited by the initial
vector p(0).
Note In a periodic and irreducible chain the limiting distribution doesn't exist, despite the fact that a statistic
6.7 Geo/Geo/1
Consider a time slotted system. Arrivals (Ak) are independent slot by slot and Bernoulli distributed:
Ak = 1 w.p. p
Ak = 0 w.p. (1 − p)
Figure 105:
The departures work this way: at the end of the slot, if there are packets to be sent, a diaphragm opens with probability µ (it stays closed with probability 1 − µ).
We study the statistics of the process Qk = {number of clients in the queue at the end of the slot}. Its update law is Qk+1 = Qk + Ak+1 − Dk+1,
where P{Dk+1 = 1|Qk + Ak+1 > 0} = µ and P{Dk+1 = 1|Qk + Ak+1 = 0} = 0. Remember Ak, Dk ∈ {0, 1}.
Therefore the statistics of Qk+1 will be independent from the past given the present Qk ⇒ it's a DTMC.
First step is to build the transition diagram of the DTMC from the update law.
The states of Qk are 0, 1, 2, …, ∞. The possible transitions are: the state increases by 1, the state decreases by 1, or it stays the same.
2. If Qk > 0 then P_{i+1,i} = pµ̄, P_{i−1,i} = p̄µ and P_{i,i} = 1 − p̄µ − pµ̄,
where p̄ = 1 − p and µ̄ = 1 − µ.
If p > 0 and µ > 0 then the chain is irreducible and infinite. If 1 − p̄µ − pµ̄ > 0 then it is aperiodic, otherwise
P = [p00 p̄µ 0 0 …; pµ̄ p11 p̄µ 0 …; 0 pµ̄ p22 p̄µ …; 0 0 pµ̄ … …]   (tridiagonal)
positive recurrent. If the chain is ergodic it will have a limiting distribution π (with πi > 0).
Let us evaluate the limiting distribution and thus the ergodicity conditions.
π = Pπ (1a)
|π| = 1 (1b)
Figure 106:
Figure 107:
π0 = (1 − pµ̄)·π0 + p̄µ·π1
πi = pµ̄·πi−1 + (1 − p̄µ − pµ̄)·πi + p̄µ·πi+1
Defining the net probability flows between adjacent states,
f0 = pµ̄·π0 − p̄µ·π1 = 0
fi−1 = pµ̄·πi−1 − p̄µ·πi = pµ̄·πi − p̄µ·πi+1 = fi
hence all the flows equal f0 = 0, i.e.
pµ̄·πi−1 = p̄µ·πi
Figure 108:
π1 = Y·π0
π2 = Y²·π0
and so forth up to
πi = Yⁱ·π0
where Y ≜ pµ̄/(p̄µ). Therefore, if Y < 1 then πi > 0 ∀i, thus all states are positive recurrent, i.e. the chain is ergodic (moreover if Y > 1 all states are transitory and if Y = 1 all states are null recurrent).
The condition Y < 1 implies p < µ, that is the stability condition of the Geo/Geo/1 queue.
Normalizing,
π0 = 1 − Y = (1 − p/µ)/(1 − p)
Finally, remembering that the server utilization fraction is
ρ = λ·S̄ = p·S̄ = p·(1/µ) = p/µ
thus
π0 (1 − p) = 1 − ρ
π0 (1 − p) is the probability that the system is idle at the generic given time t. Is that meaningful?
Figure 109:
Yes it is. π0(1 − p) is the probability that the queue is idle for the whole duration of the slot.
thus
E[Qk] = Y/(1 − Y) = pµ̄/(µ − p)
Is it possible to know the average delay time using Little's law from this relation? Yes, but paying attention to which occupancy is used: at a generic time t inside slot k+1,
Q(t) = Qk + Ak+1
E[Q(t)] = E[Qk] + p
Figure 110:
So, Little's law is related to the long-term average occupation, averaging over all times (but for a set of points of measure zero). Thus
E[Q(t)] = E[D]·p ⇒ E[D] = 1 + E[Qk]/p ⇒ E[D] = 1 + µ̄/(µ − p)
Figure 111:
For a single state:
∀i and ∀n ⇒ P{Xn ≠ i, Xn−1 = i} = P{Xn = i, Xn−1 ≠ i}   (A)
Generally, for a macrostate R:
∀R and ∀n ⇒ P{Xn ∉ R, Xn−1 ∈ R} = P{Xn ∈ R, Xn−1 ∉ R}   (B)
In common words, the probability that the chain enters R in the next step is equal to the probability that the chain leaves R.
Proof Exploiting π = Pπ we obtain
X
πi = pij πj ∀i ∈ S (1)
j∈S
where πi = P{Xn = i} and pij = P{Xn = i|Xn−1 = j}. Thus by simple probability theory
Σ_{j∈S} pij·πj = Σ_{j∈S} P{Xn = i, Xn−1 = j} = P{Xn = i, ∪_{j∈S}{Xn−1 = j}} = P{Xn = i}
since ∪_{j∈S}{Xn−1 = j} is the sure event.
Moreover, since the outgoing probabilities sum to one (Σ_{j∈S} pji = 1):
Σ_{j∈S} pji·πi = πi = Σ_{j∈S} pij·πj   (2)
i.e., cancelling the term j = i from both sides,
Σ_{j∈S,j≠i} pji·πi = Σ_{j∈S,j≠i} pij·πj   (3)
Σ_{j∈S,j≠i} P{Xn = j, Xn−1 = i} = Σ_{j∈S,j≠i} P{Xn = i, Xn−1 = j}
P{∪_{j≠i}{Xn = j}, Xn−1 = i} = P{Xn = i, ∪_{j≠i}{Xn−1 = j}}
which proves (A). For a macrostate R, sum (3) over i ∈ R: the terms pji·πi in which both i and j belong to R cancel, and we obtain
Σ_{i∈R} Σ_{j∉R} pji·πi = Σ_{i∈R} Σ_{j∉R} pij·πj
Σ_{i∈R} Σ_{j∉R} P{Xn = j, Xn−1 = i} = Σ_{i∈R} Σ_{j∉R} P{Xn = i, Xn−1 = j}
P{{Xn ∉ R}, {Xn−1 ∈ R}} = P{{Xn ∈ R}, {Xn−1 ∉ R}}
which proves (B).
Graphical interpretation Balance the flow across the closed surface that defines R.
Figure 112:
This balancing can be applied to every macrostate R, i.e. to every surface that contains DTMC states. From
6.9 Geo/Geo/1/B
Figure 113:
The system works the same as before. The only difference is that this one can hold only B users (the buffer has dimension B).
Of this system we study the statistics of Qk.
Figure 114:
The only novelty is that the system is buffered. The last state (B) works this way: if Qk = B, because of the early-arrivals and late-departures assumption, the probability of going to B−1 is
PB−1,B = µ
Thus, the steady-state distribution can be found with the flow balance
πi·p̄µ = πi−1·pµ̄   i = 1, 2, …, B−1
πB·µ = πB−1·pµ̄
We define, as always, Y = pµ̄/(p̄µ) and we obtain
πi = Yⁱ·π0   i = 1, 2, …, B−1
πB = Y^B·π0·(1 − p)
Normalizing,
1 = π0·[Σ_{i=0}^{B} Yⁱ − Y^B·p] = π0·[(1 − Y^{B+1})/(1 − Y) − Y^B·p]
π0 = (1 − Y)/(1 − (p/µ)·Y^B)
πB = (µ − p)·Y^B/(µ − p·Y^B)
Note that for p = µ ⇒ Y = 1 ⇒ π0 = 1/(B + 1 − µ) ≈ 1/B for B large.
Ploss = p·πB   (1)
Fixing a maximum tolerable loss (e.g. Ploss < 10⁻⁶), increasing B extends the range [0, pmax] in which the system works. Either way, when B ≥ 16, even doubling B increases pmax only a little.
Note It's a best practice to have some checking method to verify the results. A check can be evaluating the throughput as the average number of packets outgoing from the network.
S = P{1 client leaving at k} = P{Qk−1 + Ak > 0}·µ
(someone in the system, and the diaphragm opens), hence
Figure 115:
S = (1 − π0(1 − p))·µ
This result has to coincide with the ingoing throughput: S = p(1 − πB). There is a relation between π0 and πB that can help us verify our results.
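That cross-check is easy to automate. The sketch below rebuilds the Geo/Geo/1/B steady-state distribution from the flow balance of this section and verifies µ(1 − π0(1−p)) = p(1 − πB); the function name and test values are our illustrative choices:

```python
def geo_geo_1_b_dist(p, mu, B):
    Y = p * (1 - mu) / ((1 - p) * mu)
    pi = [Y ** i for i in range(B)]     # pi_i proportional to Y^i, i < B
    pi.append(Y ** B * (1 - p))         # pi_B = Y^B * pi_0 * (1 - p)
    norm = sum(pi)
    return [x / norm for x in pi]
```

The check also confirms the closed form π0 = (1 − Y)/(1 − (p/µ)·Y^B) derived above.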
Note An upper bound on the blocking probability (πB) can be found studying the queue Qk∞ (system with infinite buffer):
πB ≤ P{Qk∞ ≥ B} = (1 − Y)·Σ_{i=B}^{∞} Yⁱ = Y^B
This bound is valid for any queue, not only the Geo/Geo/1.
6.9.2 Delay
Let's study D̄ of the packets which enter the system. Little's law on the stable part of the system yields
Figure 116:
Figure 117:
from which
E[D] = 1 + E[Qk]/(p·(1 − πB))
where
E[Qk] = Σ_{i=0}^{B} i·πi
When the queue reaches saturation the buffer is full, and the delay tends to B·S̄ = B/µ.
is zero). Qk = n represents the number of full nodes in the system at the end of the k-th slot. When there is a client in a node, it tries to transmit with probability q. If a new client arrives with a client already in service, the transmission is lost.
Figure 118:
Figure 119:
a(i, n) ≜ P{Ak+1 = i|Qk = n} = (N−n choose i)·pⁱ·(1 − p)^{N−n−i}   i = 0, …, N−n
If Ak+1 = i arrivals appear, then in the slot there are Qk + Ak+1 = n + i clients. Every client tries, independently from the others, to transmit with probability q. There is a successful transmission if and only if only one client tries
Figure 120:
Figure 121:
Hence, if Qk = n > 0
and if Qk = 0
πn = Σ_{i=0}^{n+1} pn,i·πi
Figure 122:
π0 = π0·P00 + π1·P01 ⇒ π1 = ((1 − P00)/P01)·π0
π1 = π0·P10 + π1·P11 + π2·P12 ⇒ π2 = ((1 − P11)/P12)·π1 − (P10/P12)·π0
…
πn = ((1 − Pn−1,n−1)/Pn−1,n)·πn−1 − Σ_{i=2}^{n} (Pn−1,n−i/Pn−1,n)·πn−i
Figure 123:
Analysis of results In the figure the throughput per user is depicted as a function of the offered load p. The curves grow along the line S = p until packets start to be lost.
For low values of q (e.g. q = 0.005), Sn(p) is monotone increasing. As q grows up to q = 1/N, the curves stay monotone and the value of S grows. When q > 1/N, the curves Sn(p) have a maximum (i.e. ∂Sn(p)/∂p = 0). Beyond the maximum there is congestion, but we will see this concept later.
For every q, the saturation throughput is S(1) = N · q · (1 − q)N −1 (i.e. every node has a packet to transmit).
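A short sweep confirms that q = 1/N maximizes N·q·(1−q)^{N−1}, and that the maximum tends to e⁻¹ for large N (helper names and the grid resolution are our illustrative choices):

```python
def saturation_throughput(N, q):
    # S(1) = N * q * (1 - q)**(N - 1): every node has a packet to transmit.
    return N * q * (1 - q) ** (N - 1)

def best_q(N, steps=100000):
    # Grid search over q in (0, 1).
    return max((k / steps for k in range(1, steps)),
               key=lambda q: saturation_throughput(N, q))
```

With N = 50 the best q is 1/N = 0.02 and the value is (1 − 1/50)^49 ≈ 0.37, close to e⁻¹.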
Figure 124:
Figure 125:
Now, ∀G ∈ [0, ∞) we obtain couples (S, D̄) and we plot them in the following figure.
For low q (e.g. q = 0.008) the curve is monotone increasing and the delay is high (there is a low probability of transmission).
When q = 1/N the curve is monotone and the delay is the minimum among all the monotone curves.
Figure 126:
For q > 1/N the curve is no longer monotone: there is congestion (∂S/∂D̄ < 0).
All points at full load accumulate on the limiting curve D̄ = 1 + N/S (obtained from D̄ = 1 + Q̄k/S when Q̄k = N).
6.10.1 Dynamics
The evolution over the time of the process Qk can be obtained studying E[Ak |Qk−1 = n] and E[Dk |Qk−1 = n].
When the arrival rate is higher than the departure rate then, on average, Qk tends to increase. In the opposite
Consider now the two figures reported below. In the first, the 3 points belonging to the falling front of the throughput (p = 0.007, 0.008, 0.009) are visible. In the second, the average arrivals/departures are shown, as below.
The queue can be studied looking at Q(t) at the departure of a client. The update law of the discrete-time process Qk is:
Qk+1 = Qk − 1 + Bk+1   if Qk > 0
Qk+1 = Bk+1            if Qk = 0   (1)
where Bk is the number of arrivals during the service time (Sk) of the k-th client.
Since arrivals are Poisson (thus with exponential, i.e. memoryless, interarrival times), every departure is a renewal: from that point on the process proceeds forgetting about the past. Indeed, what would link the process to the past is the residual service time Sk (which is General, i.e. not memoryless).
Note that this chain is identical to the Slotted Aloha one with N = ∞ nodes. Let's find the transition probabilities.
Figure 127:
bi = ∫₀∞ P{Bk = i|Sk = t}·fS(t)·dt = ∫₀∞ (e^{−λt}·(λt)ⁱ/i!)·fS(t)·dt   (2)
(by total probability). This can be computed in closed form knowing fS(t), the PDF of the service time.
Thus
P_{n+i,n} = P{Qk+1 = n+i|Qk = n} = P{Bk+1 = i+1} = b_{i+1}   (Qk = n > 0)
P_{i,0} = P{Qk+1 = i|Qk = 0} = P{Bk+1 = i} = b_i   (Qk = 0)
The stationary distribution can be computed numerically (iteratively, as in Aloha). We obtain the following
π1 = ((1 − P00)/P01)·π0 = ((1 − b0)/b0)·π0   (3.1)
πn = ((1 − Pn−1,n−1)/Pn−1,n)·πn−1 − Σ_{i=2}^{n} (Pn−1,n−i/Pn−1,n)·πn−i   n ≥ 2
Figure 128:
πn = ((1 − b1)/b0)·πn−1 − Σ_{i=2}^{n−1} (bi/b0)·πn−i − (b_{n−1}/b0)·π0   n ≥ 2   (3.2)
(the coefficient of π0 is Pn−1,0 = b_{n−1}).
Note that
π0 = 1 − ρ   (3.3)
where ρ = λ·S̄. From (3.1) to (3.3) the steady-state distribution π can be found.
Figure 129:
Figure 130:
Definition (MGF) Given a continuous random variable X, the Moment Generating Function is defined as
φX(s) ≜ E[e^{sX}] = ∫_{−∞}^{+∞} fX(t)·e^{st}·dt   ∀s ∈ C (where the integral converges)
Similarly, the Characteristic Function is
ΦX(ω) = E[e^{jωX}] = ∫_{−∞}^{+∞} fX(t)·e^{jωt}·dt   ∀ω ∈ R
Definition (PGF) Given a discrete random variable X, the Probability Generating Function is defined as
ΓX(z) ≜ E[z^X] = Σ_{n=−∞}^{∞} pX(n)·zⁿ   ∀z ∈ C (where the sum converges)
Those transforms are useful to evaluate sums of random variables and moments of random variables.
Figure 131:
Properties of the MGF
1. φX(0) = 1, because ∫ fX(t)·dt = 1 (normalization).
2. E[X^k] = (d^k/ds^k)·φX(s)|_{s=0}, known as the Moments' theorem.
Properties of the PGF
1. ΓX(1) = 1, again by normalization.
2. E[X·(X−1)·(X−2)·…·(X−(k−1))] = (d^k/dz^k)·ΓX(z)|_{z=1}, the Moments' theorem for PGFs (factorial moments).
3. pX(n) = (1/n!)·(dⁿ/dzⁿ)·ΓX(z)|_{z=0}, known as the inversion formula, for X ∈ {0, 1, 2, …}.
[π0; π1; π2; …] = [b0 b0 0 …; b1 b1 b0 …; b2 b2 b1 …; … … … …]·[π0; π1; π2; …]
explicitly
π0 = π0·b0 + π1·b0
π1 = π0·b1 + π1·b1 + π2·b0
…
Now, multiply the n-th row by zⁿ and sum over all n:
Σ_{n=0}^{∞} πn·zⁿ = π0·Σ_{n=0}^{∞} bn·zⁿ + π1·Σ_{n=0}^{∞} bn·zⁿ + π2·Σ_{n=0}^{∞} bn·z^{n+1} + π3·Σ_{n=0}^{∞} bn·z^{n+2} + …
Q(z) = B(z)·[π0 + π1 + z·π2 + z²·π3 + …] = B(z)·[π0 + z⁻¹·(Q(z) − π0)]
since π1 + z·π2 + z²·π3 + … = z⁻¹·(Q(z) − π0). Rearranging,
π0·(1 − z)·B(z) = Q(z)·(B(z) − z)
Q(z) = π0·B(z)·(1 − z)/(B(z) − z)   (4)
Taking the limit z → 1 of (4) and imposing Q(1) = 1 (l'Hôpital),
1 = π0·(−1)/(E[B] − 1) ⇒ π0 = 1 − E[B] = 1 − λS̄ = 1 − ρ
B(z) = Σ_{n=0}^{∞} bn·zⁿ = Σ_{n=0}^{∞} zⁿ·∫₀∞ e^{−λt}·((λt)ⁿ/n!)·fS(t)·dt = ∫₀∞ e^{−λt}·(Σ_{n=0}^{∞} (λtz)ⁿ/n!)·fS(t)·dt
(using (2)), hence
B(z) = ∫₀∞ e^{(λz−λ)t}·fS(t)·dt = φS(λz − λ)
Substituting in (4), and writing S̃ for the MGF of the service time,
Q(z) = (1 − ρ)·S̃(λz − λ)·(1 − z)/(S̃(λz − λ) − z)   (5)
(5) expresses the PGF of Qk (discrete) in terms of S̃ (continuous). The inverse transform of (5) is (3).
Differentiating with respect to z and taking the limit z → 1 we obtain the expected value
E[Qk] = lim_{z→1} Q′(z) = ρ + λ²·S̄²/(2(1 − ρ))   (6)
(you do the math).
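Formula (6) can be sanity-checked by simulating the embedded chain (1) for an M/D/1 queue, where B ~ Poisson(ρ) and the second moment of the (deterministic) service equals S̄², so E[Qk] = ρ + ρ²/(2(1 − ρ)). Everything below — the Knuth Poisson sampler, names, seed — is an illustrative sketch, not part of the notes:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's product-of-uniforms Poisson sampler.
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod < threshold:
            return k
        k += 1

def mean_embedded_queue(rho, steps=400000, seed=7):
    # Iterate Q_{k+1} = max(Q_k - 1, 0) + B_{k+1}, B ~ Poisson(rho)  (M/D/1).
    rng = random.Random(seed)
    q, total = 0, 0
    for _ in range(steps):
        q = max(q - 1, 0) + poisson(rng, rho)
        total += q
    return total / steps
```

For ρ = 0.5 the formula gives E[Qk] = 0.5 + 0.25 = 0.75, which the long-run simulated average approaches.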
Little's law is used to compute the average delay time spent in the M/G/1 queue:
E[D] = E[Q(t)]/λ
What's the link between E[Qk ] and E[Q(t)] (assuming that ρ < 1, then {Qk } is ergodic)?
Let's say
Figure 132:
In every queue in which there are single arrivals and single departures (or a single server with non-group arrivals), the fraction of arrivals that see Q = n equals the fraction of departures that leave Q = n:
E[Q(t)] = Σ_{n=0}^{∞} n·πn^t = Σ_{n=0}^{∞} n·πn^D = E[Qk]   (at departures!)
and hence
E[D] = E[Qk]/λ = S̄ + λ·S̄²/(2(1 − ρ))   (note that this is PK's formula)
using (6).
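As a consistency check of PK's formula: with exponential service (S̄ = 1/µ, second moment 2/µ²) it must collapse to the M/M/1 delay 1/(µ − λ). A sketch (the function name and test values are ours):

```python
def pk_mean_delay(lam, s_mean, s_second_moment):
    # Pollaczek-Khinchine mean delay: S̄ + lam * E[S^2] / (2 * (1 - rho)).
    rho = lam * s_mean
    return s_mean + lam * s_second_moment / (2 * (1 - rho))
```

With deterministic service (second moment S̄²) the same formula gives a strictly smaller delay, as expected from the variance term.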
With a further investigation we can find the PMF of the delay time (D), not only its average value (D̄):
πn^D = P{Qk = n} = ∫₀∞ P{n arrivals|D = t}·fD(t)·dt = ∫₀∞ e^{−λt}·((λt)ⁿ/n!)·fD(t)·dt
(by total probability).
As in relation (3), the transform version of this is
D̃(s) = (1 − ρ)·s·S̃(s)/(s − λ + λ·S̃(s))   (PK transform formula)
This formula relates the MGF of D to the MGF of S. It can be inverted setting s = jω and evaluating the inverse Fourier transform.
6.12 M/G/1/B
Same queue as so far, but in this case it's buered.
Figure 133:
Figure 134:
[π0; π1; π2; …; πB−1] = [b0 b0 0 …; b1 b1 b0 0 …; b2 b2 b1 b0 …; … … … …; 1−Σ_{n=0}^{B−2}bn  1−Σ_{n=0}^{B−2}bn  1−Σ_{n=0}^{B−3}bn  …  1−b0] · [π0; π1; π2; …; πB−1]
The last row is the sum of all the rows from B−1 to ∞ of the matrix we found for the unbuffered M/G/1.
The steady-state distribution is then
πi = πi∞ / (Σ_{n=0}^{B−1} πn∞)   i = 0, 1, …, B−1   (2)
where the denominator is the normalization.
Since the +1 and −1 transitions are balanced, πD, already found, is equal to πA, the PMF seen by an arrival that enters the system. Contrariwise, the Poisson arrivals offered to the system see one more state: B (i.e. full system). Hence the distribution π̂A, seen by Poisson arrivals, has the state B in addition, but the components i ∈ {0, …, B−1} have to be proportional to πA. Finally
πi^D = πi^A = π̂i^A/(1 − π̂B^A)   i = 0, 1, …, B−1   (3)
And by the PASTA property π̂A = πt: this distribution coincides with the distribution over all times.   (4)
The unknown π̂B^A can be found as follows. The carried load is ρ′ = λ′·S̄ with λ′ = λ·(1 − π̂B^A), and it must equal the server utilization 1 − π0^t:
1 − π0^t = λ·(1 − π̂B^A)·S̄ ⇒ (1 − π̂B^A) = (1 − π0^t)/(λ·S̄)   (5)
Combining (3), (4) and (5): π0^t = π̂0^A = π0^D·(1 − π̂B^A) = π0^D·(1 − π0^t)/(λ·S̄), hence
π0^t = π0^D/(π0^D + λ·S̄)   (6)
Substituting (6) in (3),
πi^t = π̂i^A = πi^D/(π0^D + λ·S̄)
Finally E[Q(t)] = Σ_{i=0}^{B−1} i·πi^D/(π0^D + λ·S̄) + B·πB^t. By Little's law the average delay time is then E[D] = E[Q(t)]/λ′.
Z −1 {Q(z)} → πn∞ = (1 − ρ)ρn n = 0, 1, 2, ... (geometric)
Thus (from relation (2)) the steady-state PMF of the queue with buffer B is
πn^B = (1 − ρ)·ρⁿ/(1 − ρ^B)   n = 0, 1, …, B−1   (truncated geometric)
The normalized throughput is
ρ′ = λ′·S̄ = 1/(1 + (1 − ρ)/(ρ·(1 − ρ^B)))
and the normalized delay comes out from Little's law:
D̂ = E[D]/S̄ = E[Q(t)]/ρ′.
Figure 135:
reserving the channel. Each node has only one packet in the buffer. A packet which arrives in an idle slot tries to transmit with probability q. A packet which arrives while a transmission occurs is considered backlogged (it will try to transmit afterwards, always with probability q). Every packet transmission must be followed by an idle slot (nodes can transmit only after sensing an idle slot). The transmission lasts M = Dp/∆ slots (assume M is an integer number).
Define
Figure 136:
Figure 137:
Update law and state diagram The update law can be found with the reasoning below.
Figure 138:
• transmission of a packet
• a collision
Hence
Qk+1 = Qk − 1 + arrivals in M+1 slots   (A: successful transmission)
Qk+1 = Qk + arrivals in 1 slot          (B: idle slot)          (1)
Qk+1 = Qk + arrivals in 2 slots         (C: collision)
If Qk = n, each of the N−n free nodes will have a packet at the end of slot k+1 with probability
Pa = 1 − e^{−λ(M−1)∆}   (A)
Pb = 1 − e^{−λ∆}        (B)   (2)
Pc = 1 − e^{−2λ∆}       (C)
aE(i, n) ≜ P{Ak+1 = i|Qk = n, E} = (N−n choose i)·pE^i·(1 − pE)^{N−n−i}   (binomial)   (3)
with E ∈ {A, B, C}, pE the corresponding probability from (2), and i = 0, …, N−n. The departure depends on the
following:
P {A|n} = P {1 T X well ended} = n · q · (1 − q)n−1
P {B|n} = P {0 T X well ended} = (1 − q)n (4)
P {C|n} = P {more than 2 T X well ended} = 1 − (1 − q)n − n · q · (1 − q)n−1 = 1 − (1 − q)n−1 (1 − q(1 − n))
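The three probabilities partition the renewal outcomes, so they must sum to one for every n. A quick check (q chosen arbitrarily), which also verifies the factored form of P{C|n}:

```python
# Sanity check: for any backlog n and retransmission probability q,
# the three renewal outcomes (success / idle / collision) partition
# the event space, so their probabilities sum to 1.
def event_probs(n, q):
    pA = n * q * (1 - q) ** (n - 1)      # exactly one node transmits: success
    pB = (1 - q) ** n                    # nobody transmits: idle slot
    pC = 1 - pB - pA                     # two or more transmit: collision
    return pA, pB, pC

q = 0.1
for n in range(1, 20):
    pA, pB, pC = event_probs(n, q)
    assert abs(pA + pB + pC - 1) < 1e-12 and pC >= -1e-12
    # the algebraic identity used in (4) for P{C|n}
    pC_alt = 1 - (1 - q) ** (n - 1) * (1 - q * (1 - n))
    assert abs(pC - pC_alt) < 1e-12
```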
The state diagram is the same as in Aloha, but with different transition probabilities.
From (3) and (4) we can find the ingredients for the evaluation of the transition probabilities. For Q_k = n > 0, conditioning on the event:

    P_{n+i,n} = P{A_{k+1} = i | Q_k = n, B}·P{B|n} + P{A_{k+1} = i | Q_k = n, C}·P{C|n} + P{A_{k+1} = i+1 | Q_k = n, A}·P{A|n}
              = a_b(i, n)·P{B|n} + a_c(i, n)·P{C|n} + a_a(i+1, n)·P{A|n}
    P_{n+(N−n),n} = a_b(N−n, n)·P{B|n} + a_c(N−n, n)·P{C|n}                     (5)
    P_{n−1,n} = a_a(0, n)·P{A|n}

For Q_k = 0:

    P_{i,0} = a_b(i, 0) · P{B|∅},   with P{B|∅} = 1
This is a special case included in the more general formula (5).
Now we have all the ingredients to find the steady state distribution π (always with the same iterative algorithm). Over a renewal, arrivals balance departures, so the throughput in pck per renewal is

    S = E[A] = E[D]

and we can evaluate it in two ways.
First way

    S = Σ_{n=0}^{N} E[A | Q_k = n] π_n      (6)

where E[A|Q_k = n] = E[A|Q_k = n, A]·P{A|n} + E[A|Q_k = n, B]·P{B|n} + E[A|Q_k = n, C]·P{C|n}, and A ∼ Bin(N − n, P_{a,b,c}), with average (N − n)·P_{a,b,c}. Using (2) and (3), then

    E[A | Q_k = n] = (N − n)·[P_a·P{A|n} + P_b·P{B|n} + P_c·P{C|n}]
Second way

    S = Σ_{n=0}^{N} E[D | Q_k = n] π_n = Σ_{n=0}^{N} n·q·(1 − q)^{n−1} π_n      (7)
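The two evaluations can be checked against each other numerically. The sketch below is built entirely from (1)–(5); all parameters (N, q, λ, ∆, M) are invented for illustration, and a plain power iteration stands in for the iterative algorithm mentioned in the text:

```python
import numpy as np
from math import comb, exp

# Illustrative parameters (not from the text)
N, q, lam, Delta, M = 10, 0.1, 0.05, 1.0, 5

# Per-free-node arrival probabilities (2) for renewals of type A, B, C
Pa = 1 - exp(-lam * (M - 1) * Delta)
Pb = 1 - exp(-lam * Delta)
Pc = 1 - exp(-lam * 2 * Delta)

def binom_pmf(i, m, p):              # a_E(i, n) of (3), with m = N - n free nodes
    return comb(m, i) * p**i * (1 - p) ** (m - i)

def event_probs(n):                  # (4)
    if n == 0:
        return 0.0, 1.0, 0.0         # empty system: the slot is idle (event B)
    pA = n * q * (1 - q) ** (n - 1)
    pB = (1 - q) ** n
    return pA, pB, 1 - pA - pB

# Column-stochastic transition matrix P[next, current] from (5)
P = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    pA, pB, pC = event_probs(n)
    m = N - n
    for i in range(m + 1):
        P[n + i, n] += binom_pmf(i, m, Pb) * pB + binom_pmf(i, m, Pc) * pC
        if i + 1 <= m:               # event A: i+1 arrivals, one departure
            P[n + i, n] += binom_pmf(i + 1, m, Pa) * pA
    if n > 0:
        P[n - 1, n] += binom_pmf(0, m, Pa) * pA
assert np.allclose(P.sum(axis=0), 1.0)

# Steady state by power iteration
pi = np.ones(N + 1) / (N + 1)
for _ in range(20000):
    pi = P @ pi
pi /= pi.sum()

# First way (6): expected arrivals per renewal
S1 = sum(pi[n] * (N - n) * (pa * Pa + pb * Pb + pc * Pc)
         for n in range(N + 1) for pa, pb, pc in [event_probs(n)])
# Second way (7): expected departures per renewal
S2 = sum(pi[n] * n * q * (1 - q) ** (n - 1) for n in range(1, N + 1))
print(S1, S2)
```

In steady state the drift of Q_k is zero, so arrivals per renewal equal departures per renewal and the two sums coincide.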
Now we want the throughput S in pck/slot, where a slot equals M mini-slots (the slots needed for the transmission of a packet):

    S = avg number of arrivals per slot = E[A] · E[C]      (9)

where E[A] is the average number of arrivals per renewal and E[C] is the average number of renewals per slot.
Delay time and network number The average time spent in the system can be evaluated, as always, by Little's law.
Figure 139:
    E[Q(t)] = S · E[D]      (10)
              [arrivals/slot] · [slots]

where E[Q(t)] is approximated with Q̄ = Σ_n n·π_n.
Full load check When all nodes have one ready packet to send, then once more q = 1/N maximizes E[A] = E[D]. The following holds:

    E[D] = N·q·(1 − q)^{N−1} = (1 − 1/N)^{N−1} → e^{−1}   (large N)

Therefore

    P{A|N} = (1 − 1/N)^{N−1} → e^{−1}
    P{B|N} = (1 − 1/N)^{N} → e^{−1}
    P{C|N} → 1 − 2e^{−1}

    S = e^{−1} · M / (2 + e^{−1}(M − 2)) = 1 / (1 + 2(e − 1)/M) ≃ 1 / (1 + 3.43/M)
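These limits and the final approximation are easy to verify numerically (N and M below are illustrative):

```python
from math import e

# Full-load limits: (1 - 1/N)^(N-1) -> 1/e for large N
N = 10**6
assert abs((1 - 1/N) ** (N - 1) - 1/e) < 1e-5
assert abs((1 - 1/N) ** N - 1/e) < 1e-5

# The two throughput forms are algebraically identical, and close to 1/(1 + 3.43/M)
M = 100
S_exact = (1/e) * M / (2 + (1/e) * (M - 2))
S_approx = 1 / (1 + 2 * (e - 1) / M)
assert abs(S_exact - S_approx) < 1e-12
assert abs(S_exact - 1 / (1 + 3.43 / M)) < 1e-3
print(S_exact)
```

The identity follows by multiplying numerator and denominator of the first form by e: e^{−1}M/(2 + e^{−1}(M−2)) = M/(M + 2e − 2) = 1/(1 + 2(e−1)/M), and 2(e−1) ≈ 3.44.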
This result is slightly different from the one we derived so far. This is because here the model of the Ethernet
The figure shows E[A], E[C] and S versus the offered load G = NλDp for a network with N = 30 nodes, varying the back-off q, with M = 100 mini-slots. The curves E[A] are similar to those of Slotted Aloha. Composing S and
Also here it is necessary that q varies as a function of the load, i.e. q needs to decrease when the number in the system increases. Indeed, if q is fixed, the bistability phenomenon (already shown for Aloha) can appear.
Figure 140:
A DTMC in which the states are in part transitory and in part absorbing is said to be an absorbing Markov chain (AMC). Each ergodic
In canonical form,

    P = [ I  R ]        S = { 1, ..., r ; 1, ..., s }      (1)
        [ 0  Q ]            (A: absorbing)  (T: transitory)

Note If an AMC has more than r classes of states, then it is not ergodic. Absorbing states are ergodic and they
Fact (Probability of falling in an absorbing state) In an AMC the probability of falling (ending) in an
1. What is the average number of visits at each transitory state i?
2. Given T, the number of steps before absorption, what is E[T]?
3. What is the probability that the AMC ends in a given absorbing state?
Figure 141:
7.1.1 Answer to 1
From the canonical form of P one obtains the following:

    P^n = [ I   R · Σ_{k=0}^{n−1} Q^k ]      (2)
          [ 0   Q^n                   ]

where Q^n = {q_ij^{(n)}}, in which q_ij^{(n)} = P{X_n = i | X_0 = j} and (i, j) ∈ T.
Since, by the definition of transitory state, the probability of the chain being in a transitory state as n → ∞ is 0,

    lim_{n→∞} Q^n = 0

Therefore the series I + Q + Q² + ... converges to (I − Q)^{−1} (as it happens with scalars). The matrix

    N = (I − Q)^{−1}
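A minimal sketch of the fundamental matrix on a toy absorbing chain (the chain itself is invented for illustration, written in the column-stochastic canonical form (1) used here):

```python
import numpy as np

# Toy AMC: states {0,1} absorbing, {2,3} transitory; columns index the
# current state, so each full column of the canonical P sums to 1.
Q = np.array([[0.3, 0.6],        # transitory -> transitory block
              [0.5, 0.0]])
R = np.array([[0.2, 0.0],        # transitory -> absorbing block
              [0.0, 0.4]])
assert np.allclose(Q.sum(axis=0) + R.sum(axis=0), 1.0)

# Fundamental matrix N = (I - Q)^(-1) equals the series I + Q + Q^2 + ...
N = np.linalg.inv(np.eye(2) - Q)
series = sum(np.linalg.matrix_power(Q, k) for k in range(200))
assert np.allclose(N, series)

# Column sums of N: mean number of steps before absorption from each
# transitory starting state, E[V(inf) | X0 = j]
ET = np.ones(2) @ N
print(ET)
```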
Figure 142:
Interpretation of the elements of N Suppose that the AMC starts in the transitory state j ∈ T: {X_0 = j}. Define the visit indicator function of the state i ∈ T as the following

    I_i(n) = { 1  if X_n = i, i ∈ T      (3)
             { 0  elsewhere

    V_i(n) = Σ_{k=0}^{n} I_i(k)      (4)

    E[V_i(n) | X_0 = j] = Σ_{k=0}^{n} P{X_k = i | X_0 = j} = Σ_{k=0}^{n} P_ij^{(k)}      (5)

Remember P_ij^{(k)} is the element in position (i, j) of the matrix P^k. Therefore

    Σ_{k=0}^{n} P_ij^{(k)} = Σ_{k=0}^{n} q_ij^{(k)} = q_ij^{(0)} + q_ij^{(1)} + ... + q_ij^{(n)}
where q_ij^{(k)} are the elements of the matrix Q^k. Taking the limit for n → ∞, then

    n_ij ≜ E[V_i(∞) | X_0 = j] = Σ_{k=0}^{∞} q_ij^{(k)}

Therefore

    {n_ij} = Σ_{k=0}^{∞} Q^k = (I − Q)^{−1} = N

Hence: the (i, j) element of N represents the average number of visits E[V_i(∞) | X_0 = j] to the state i (transitory), given that the chain started in j (transitory too).
Adding over all transient i, the total number of visits V(∞) before absorption, starting from X_0 = j, is

    E[V(∞) | X_0 = j] = E[Σ_{i∈T} V_i(∞) | X_0 = j] = Σ_{i∈T} n_ij = ((1, ..., 1) · N)_j      (by linearity)
Suppose that the AMC starts at t = 0 from the transitory states with a given probability vector p̂(0). From the Total Probability Theorem

    E[V(∞)] = Σ_{j=1}^{s} E[V(∞) | X_0 = j] · P{X_0 = j} = (1, ..., 1) · N · p̂(0)      (7)

Moreover

    E[V_i(∞)] = Σ_{j=1}^{s} E[V_i(∞) | X_0 = j] · P{X_0 = j}      (8)

Hence

    E[v(∞)] = N · p̂(0) = [E[V_1(∞)], ..., E[V_s(∞)]]^T      (9)

(9) is the vector of visits starting from the distribution p̂(0). The sum of the elements in the vector gives E[V(∞)].
Indeed, expanding (8) with (5) and exchanging the order of summation:

    E[V_i(∞)] = Σ_{j=1}^{s} ( Σ_{k=0}^{∞} E[I_i(k) | X_0 = j] ) · P{X_0 = j}
              = Σ_{k=0}^{∞} Σ_{j=1}^{s} E[I_i(k) | X_0 = j] · P{X_0 = j}
              = Σ_{k=0}^{∞} E[I_i(k)] = Σ_{k=0}^{∞} p_i(k)
Figure 143:
We are now almost done with the answer to question 1. The only thing we are missing is the absorption time T. Define

    I_A(n) = { 1  if X_n is in A at time n
             { 0  if X_n is in T at time n

Figure 144:
From here it is easy to see that V(∞) = T. Now E[T] is known, but we are also interested in T's PMF.
7.1.2 Answer 2
Consider the state vector at time n:

    p(n) = [ p_1(n), ..., p_r(n) | p̂_1(n), ..., p̂_s(n) ]^T
             (absorbing, A)         (transitory, T)

hence we have
    Σ_{i∈A} p_i(n) = P{X_n ∈ A} = E[I_A(n)]

But if X_n is in an absorbing state, then the chain arrived there at or before n, i.e. {X_n ∈ A} ⇒ {T ≤ n}. Moreover this relation is valid also in the other direction: {T ≤ n} ⇒ {X_n ∈ A}. Therefore, since the two events are identical,

    P{T ≤ n} = Σ_{i∈A} p_i(n)      (CDF of T)

This is a numerical algorithm to compute the CDF of T.
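The algorithm can be sketched as follows, on a toy chain invented for illustration (same column-stochastic convention: states {0,1} absorbing, {2,3} transitory). It also cross-checks E[T] = Σ_n P{T > n} against the fundamental matrix:

```python
import numpy as np

# Toy absorbing chain in canonical (column-stochastic) form
P = np.array([[1.0, 0.0, 0.2, 0.0],
              [0.0, 1.0, 0.0, 0.4],
              [0.0, 0.0, 0.3, 0.6],
              [0.0, 0.0, 0.5, 0.0]])
p = np.array([0.0, 0.0, 1.0, 0.0])       # start in transitory state 2

# CDF of T: iterate p(n+1) = P p(n) and accumulate the absorbing mass
cdf = []
for n in range(500):
    p = P @ p
    cdf.append(p[0] + p[1])              # P{T <= n+1}
assert all(b >= a for a, b in zip(cdf, cdf[1:]))   # non-decreasing CDF
assert abs(cdf[-1] - 1.0) < 1e-10                  # absorption is certain

# Cross-check: E[T] = sum_n P{T > n} against the fundamental matrix N
ET_cdf = 1 + sum(1 - c for c in cdf)     # P{T > 0} = 1 since X0 is transitory
Q = P[2:, 2:]
ET_N = (np.ones(2) @ np.linalg.inv(np.eye(2) - Q))[0]
print(ET_cdf, ET_N)
```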
7.1.3 Answer 3
From the relations (2) and (2.1) we have

    lim_{n→∞} P^n = P^∞ = {p_ij^{(∞)}} = [ I   R·N ]
                                         [ 0    0  ]

hence

    p_ij^{(∞)} = P{X_∞ = i | X_0 = j} = {RN}_{i,j},   i ∈ A, j ∈ T

where P{X_∞ = i | X_0 = j} is the probability that the AMC ends up in i, given that it starts from j.
Figure 145:
Figure 146:
Definition (CTMC) A process {X(t), t ≥ 0} is said to be a Continuous Time Markov Chain if it assumes discrete values in a countable set of states S and it has the Markov property.
Definition (Time homogeneous CTMC) A CTMC is said to be time homogeneous (or simply homogeneous) if ∀s ≥ 0 and ∀t ≥ 0

    P{X(t + s) = i | X(s) = j} = P{X(t) = i | X(0) = j} = p_ij(t)

where p_ij(t) is called the transition probability from j to i in the interval (0, t).
Definition (Transition matrix) For a time homogeneous CTMC the transition matrix is defined as the following
Figure 147:
    P{A(t) = i | A(0) = j} = p_ij(t) = P{(i − j) arrivals in (0, t)} = (λt)^{i−j} e^{−λt} / (i − j)!,   i ≥ j

Thus this process is a CTMC, because it satisfies the Markov property.
Proof Markovity says that every instant t_n is a renewal (from that instant on the process continues without memory).
Homogeneity means that the origin of the time axis can be moved to the beginning of the sojourn in i.
Suppose now that X has sojourned in the state i for s seconds, so the event A = {T_i ≥ s} = {chain stays in i ∀t ∈ [0, s]} has occurred. Given that, what is the probability that it stays there t more seconds?

    P{T_i ≥ t + s | T_i ≥ s} = ?
Figure 148:
    ? = P{T_i ≥ t}

This shows that T_i ∼ exp(·), because the exponential distribution is the only distribution that satisfies this (memoryless) property.
of an embedded DTMC, in which q_ii = 0. At each transition the CTMC spends an exponentially distributed time
Figure 149:
    p(t) = [p_0(t), p_1(t), ...]^T

This vector is named the distribution of X at time t.
Let's now analyze the time evolution of this vector. As for the DTMC, the PMF at time t + δ is obtained by conditioning on X(t) (Total Probability):

    p_i(t + δ) = Σ_{j∈S} P{X(t + δ) = i | X(t) = j} p_j(t)   ∀i ∈ S

In matrix form, p(t + δ) = P(δ) p(t), hence

    lim_{δ→0} [p(t + δ) − p(t)]/δ = lim_{δ→0} [P(δ) − I]/δ · p(t)

where Γ = {γ_ij} ≜ lim_{δ→0} [P(δ) − I]/δ is called the infinitesimal generator matrix. Thus

    d/dt p(t) = Γ · p(t)      (Chapman–Kolmogorov, CK)
This equation has the following solution:

    p(t) = e^{Γt} · p(0)

as the state of an LTI system with system matrix Γ.
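A sketch of this solution on an invented 2-state generator (columns sum to zero, as derived below). The matrix exponential is computed by a plain Taylor series over one time unit and then powered, which is numerically safe for this small, well-scaled Γ:

```python
import numpy as np

def expm(A, terms=200):
    # Matrix exponential via its Taylor series (fine for small, well-scaled A)
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

# Illustrative 2-state CTMC: rate 1 out of state 0, rate 2 out of state 1
G = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])
p0 = np.array([1.0, 0.0])

P1 = expm(G)                             # transition matrix over one time unit
assert np.allclose(P1.sum(axis=0), 1.0)  # columns remain probability vectors

# p(10) = e^(Gamma * 10) p(0) = (e^Gamma)^10 p(0), since Gamma commutes with itself
p10 = np.linalg.matrix_power(P1, 10) @ p0
print(p10)                               # close to pi, the solution of Gamma pi = 0
```

Here Γπ = 0 gives π = [2/3, 1/3], and p(10) has essentially converged to it (the non-zero eigenvalue of Γ is −3, so the transient decays as e^{−3t}).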
    P{1 transition in (0, δ)} = P{T_j ≤ δ, T_j + T_i > δ} = ∫_0^δ ν_j e^{−ν_j t} ( ∫_{δ−t}^∞ ν_i e^{−ν_i τ} dτ ) dt
                              = ν_j (e^{−ν_i δ} − e^{−ν_j δ}) / (ν_j − ν_i)                      (you do the math)
                              = ν_j/(ν_j − ν_i) · (1 − ν_i δ + O(δ) − 1 + ν_j δ + O(δ)) = ν_j δ + O(δ)      (Taylor)
Therefore
    p_jj(δ) = P{0 transitions in (0, δ)} = 1 − ν_j δ + O(δ)
    p_ij(δ) = P{1 transition in (0, δ)} · q_ij = q_ij ν_j δ + O(δ)     for i ≠ j

Thus, by definition of Γ,

    γ_jj = lim_{δ→0} (p_jj(δ) − 1)/δ = lim_{δ→0} (1 − ν_j δ + O(δ) − 1)/δ = −ν_j < 0
    γ_ij = lim_{δ→0} p_ij(δ)/δ = lim_{δ→0} (q_ij ν_j δ + O(δ))/δ = q_ij ν_j > 0
Note that

    Σ_{i∈S, i≠j} γ_ij = ν_j · Σ_{i∈S, i≠j} q_ij = ν_j      (since Σ_{i≠j} q_ij = 1)

that is, the transition rate out of j.
Since γ_jj = −ν_j, then Σ_{i∈S} γ_ij = 0 (i.e. the columns of Γ sum to zero).
Moreover it is possible to evaluate the elements {q_ij} from the transition rates:

    q_ij = { γ_ij / Σ_{k≠j} γ_kj    i ≠ j
           { 0                      i = j

Thus in matrix form, scaling each column j by 1/ν_j = −1/γ_jj:

    Q = I − Γ · [diag Γ]^{−1}

where [diag Γ]^{−1} = diag(1/γ_11, ..., 1/γ_nn).
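A quick check of this construction on an invented 3-state generator. Note that with the column-stochastic convention of these notes (columns of Γ sum to zero, j indexing the current state), the diagonal scaling must act on the columns, i.e. Γ is multiplied by [diag Γ]^{−1} on the right:

```python
import numpy as np

# Illustrative generator: columns sum to zero, gamma_jj = -nu_j
G = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -1.0,  1.0],
              [ 1.0,  0.0, -3.0]])
assert np.allclose(G.sum(axis=0), 0.0)

# Embedded-DTMC matrix: q_ij = gamma_ij / nu_j off-diagonal, q_jj = 0
Q = np.eye(3) - G @ np.diag(1.0 / np.diag(G))
assert np.allclose(np.diag(Q), 0.0)        # q_ii = 0 in the embedded DTMC
assert np.allclose(Q.sum(axis=0), 1.0)     # columns are probability vectors
print(Q)
```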
At steady state, lim_{t→∞} p(t) = π, with

    Γ · π = 0   ⇔   Σ_{j∈S} γ_ij π_j = 0   ∀i ∈ S

that is,

    ν_i π_i = Σ_{j≠i} γ_ij π_j   ∀i ∈ S      (1)

These last are called the Global Balance Equations. The LHS of the equations represents the rate of going out of i, being in i. The RHS represents the rate of going into i, being outside of i.
It is common to represent a CTMC with a state diagram (as we did for a DTMC). The arrows between states are labeled with the transition rates γ_ij (the γ_ii are omitted).
Fig A1.
Note that equations (1) are the flow balances at a given node. These can be generalized to a given macro-state.
We now introduce w: the steady-state distribution of the embedded DTMC. Thus

    w_i = Σ_{j≠i} w_j q_ij   ∀i ∈ S      (W)

Substituting w_i ∝ ν_i π_i into (W) gives

    π_i = Σ_{j≠i} (ν_j / ν_i) q_ij π_j   ∀i ∈ S

and hence

    π_i = (w_i / ν_i) / Σ_{j∈S} (w_j / ν_j)      (2)
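Relation (2) can be verified numerically on an invented generator: compute w from (W), form π ∝ w_i/ν_i, and check that Γπ = 0:

```python
import numpy as np

# Illustrative 3-state generator (columns sum to zero; not from the text)
G = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -1.0,  1.0],
              [ 1.0,  0.0, -3.0]])
nu = -np.diag(G)                               # rates out of each state
Q = np.eye(3) - G @ np.diag(1.0 / np.diag(G))  # embedded DTMC (q_ii = 0)

def stationary(M):
    # Solve M x = x with sum(x) = 1 by replacing one balance equation
    A = M - np.eye(len(M))
    A[-1, :] = 1.0
    b = np.zeros(len(M)); b[-1] = 1.0
    return np.linalg.solve(A, b)

w = stationary(Q)                   # embedded-chain steady state (W)
pi = (w / nu) / np.sum(w / nu)      # relation (2)
assert np.allclose(G @ pi, 0.0)     # pi indeed solves Gamma pi = 0
print(pi)
```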
9 Semi-Markov Processes
Definition (Semi-Markov process) A process X(t) is said to be Semi-Markov if it changes state according to an embedded DTMC, but the time spent in each state i depends on a generic PDF f_{T_i}(t) with expected value E[T_i] = 1/ν_i.
Fact If w is the steady-state distribution of the embedded DTMC, then the fraction of time spent by the process in state i is

    π_i = (w_i / ν_i) / Σ_j (w_j / ν_j)   ∀i ∈ S
Proof Let V_i(n) be the number of visits to i in the first n transitions and T_i(k) the time spent in i at the k-th visit. The fraction of time spent by X(t) in i over the first n transitions is then

    (Time in i)/(Total time) = Σ_{k=1}^{V_i(n)} T_i(k) / Σ_{j∈S} Σ_{k=1}^{V_j(n)} T_j(k)

Dividing numerator and denominator by n and applying the Law of Large Numbers (V_i(n)/n → w_i and (1/V_i(n)) Σ_k T_i(k) → E[T_i] as n → ∞):

    = [ (1/V_i(n)) Σ_{k=1}^{V_i(n)} T_i(k) · V_i(n)/n ] / Σ_{j∈S} [ (1/V_j(n)) Σ_{k=1}^{V_j(n)} T_j(k) · V_j(n)/n ]
    → E[T_i]·w_i / Σ_j E[T_j]·w_j = (w_i/ν_i) / Σ_j (w_j/ν_j) = π_i
It can also be proven, in a long-term sense, that π_i is the limiting probability that the system is in state i:

    lim_{t→∞} P{X(t) = i} = π_i

Process with memory In a Semi-Markov Process the Markov property (the future is independent of the past given the present) is no longer valid. Indeed, the future depends on how long X(t) has been in a given state.