then R(t) represents the total reward earned by the time t. Let E[R] denote the expected reward earned in a cycle and E[X] the expected length of a cycle. Then

(b) lim_{t→∞} E[R(t)]/t = E[R]/E[X]
The reward is the number of packets transmitted successfully in a renewal cycle. The expected reward is 1·Ps + 0·(1 − Ps) = Ps. Therefore the throughput S is

S = Ps / [σ(1 − Ps + Pc) + 2σPc + (E[L] + σ)Ps]        (81)
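The renewal-reward limit above is easy to check numerically. The following sketch uses hypothetical parameters (unit-mean exponential cycle lengths and a per-cycle success probability Ps, with reward 1 per successful cycle) and compares the simulated long-run reward rate with E[R]/E[X] = Ps:

```python
import random

def renewal_reward_rate(num_cycles=200_000, Ps=0.6, seed=1):
    """Estimate the long-run reward rate R(t)/t by simulating i.i.d.
    renewal cycles; the renewal-reward theorem predicts E[R]/E[X]."""
    random.seed(seed)
    total_reward = 0.0
    total_time = 0.0
    for _ in range(num_cycles):
        X = random.expovariate(1.0)               # cycle length, E[X] = 1
        R = 1.0 if random.random() < Ps else 0.0  # reward: 1 on success
        total_reward += R
        total_time += X
    return total_reward / total_time

rate = renewal_reward_rate()  # should be close to Ps / E[X] = 0.6
```

The cycle-length and reward distributions here are illustrative choices; the ratio E[R]/E[X] is what the theorem guarantees regardless of those choices.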
Stochastic Modelling T G Venkatesh
The length of the cycle is a random variable defined as

length = X,  if X ≤ 10
length = 10, if X > 10
The average long-run cost per unit time is 5400/8.75 ≈ 617.14 units per year.
E[cost of a cycle] = cµ[1 + 2 + · · · + (N − 1)] = cµN(N − 1)/2

The average cost incurred is

E[cost of a cycle]/E[length of a cycle] = [cµN(N − 1)/2]/(Nµ) = c(N − 1)/2
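As a sketch, this average-cost calculation can be reproduced directly from the renewal-reward ratio. The function below (with hypothetical values of c, µ, and N) checks that E[cost of a cycle]/E[length of a cycle] reduces to c(N − 1)/2, independently of µ:

```python
def average_cost_per_unit_time(c, mu, N):
    """Renewal-reward average cost: E[cost of a cycle] / E[length of a cycle],
    with cycle cost c*mu*(1 + 2 + ... + (N - 1)) and cycle length N*mu."""
    expected_cycle_cost = c * mu * sum(range(1, N))  # = c*mu*N*(N-1)/2
    expected_cycle_length = N * mu
    return expected_cycle_cost / expected_cycle_length

# e.g. c = 2, N = 5 gives c*(N - 1)/2 = 4, whatever mu is
```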
Regenerative Process
Consider a stochastic process {X(t), t ≥ 0} with state space {0, 1, 2, . . .} having the property that there exist time points at which the process probabilistically restarts itself. That is, suppose that with probability 1 there exists a time T1 such that the continuation of the process beyond T1 is a probabilistic replica of the whole process starting at 0. Note that this property implies the existence of further times T2, T3, . . . having the same property as T1. Such a stochastic process is known as a regenerative process.

From the preceding, it follows that T1, T2, . . . constitute the arrival times of a renewal process, and we shall say that a cycle is completed every time a renewal occurs.
Examples
• A renewal process is regenerative, and T1 represents the time of the first renewal.
• A recurrent Markov chain is regenerative, and T1 represents the time of the first transition into the initial state.
We are interested in determining the long-run proportion of time that a regenerative process spends in state j. To obtain this quantity, imagine that we earn a reward at rate 1 per unit time when the process is in state j and at rate 0 otherwise. That is, if I(s) represents the rate at which we earn at time s, then

I(s) = 1, if X(s) = j
I(s) = 0, if X(s) ≠ j

The total reward earned by time t is

∫_0^t I(s) ds

The average reward per unit time = E[reward by time T1]/E[T1]
For an alternating renewal process whose on-periods have length Z and whose off-periods have length Y, the long-run proportion of time the system is on is

Pon = E[Z]/(E[Y] + E[Z]) = E[on]/(E[on] + E[off])

and the long-run proportion of time it is off is

Poff = 1 − Pon = E[off]/(E[on] + E[off])
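These on/off proportions can be checked with a short simulation of an alternating renewal process. This is a sketch with hypothetical exponential on/off periods of means 2 and 3, so the predicted on-proportion is 2/(2 + 3) = 0.4:

```python
import random

def simulate_p_on(num_cycles=100_000, mean_on=2.0, mean_off=3.0, seed=7):
    """Fraction of time spent 'on' in an alternating renewal process;
    theory predicts E[on] / (E[on] + E[off])."""
    random.seed(seed)
    on_time = 0.0
    off_time = 0.0
    for _ in range(num_cycles):
        on_time += random.expovariate(1.0 / mean_on)    # one on-period
        off_time += random.expovariate(1.0 / mean_off)  # one off-period
    return on_time / (on_time + off_time)

p_on = simulate_p_on()  # theory: 2 / (2 + 3) = 0.4
```

Only the means of the on- and off-periods matter for the limit, so the exponential choice here is purely for convenience.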
1. Pi, the proportion of time the policyholder pays at rate ri, i = 0, 1.
2. The long-run average amount paid per unit time.

Solution: If we say that the system is 'on' when the policyholder pays at rate r1 and 'off' when she pays at rate r0, then this on-off system is an alternating renewal process with a new cycle starting each time a claim is made. Let X be the time interval between two claims; then the on-time in a cycle is the smaller of s and X. The long-run average amount paid per unit time is

r0 P0 + r1 P1 = r1 − (r1 − r0)e^(−λs)
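A minimal sketch of this example, with hypothetical values r0 = 1, r1 = 2, λ = 1, s = 1: in each cycle the time X between claims is Exp(λ), the policyholder pays at rate r1 for min(s, X) time units and at rate r0 for the remainder, and the simulated payment rate is compared with the closed form r1 − (r1 − r0)e^(−λs):

```python
import math
import random

def closed_form_rate(r0, r1, lam, s):
    """Long-run average payment per unit time:
    r0*P0 + r1*P1 = r1 - (r1 - r0)*exp(-lam*s)."""
    return r1 - (r1 - r0) * math.exp(-lam * s)

def simulated_rate(r0, r1, lam, s, num_cycles=100_000, seed=3):
    """Monte Carlo estimate via the alternating renewal process."""
    random.seed(seed)
    paid = 0.0
    total_time = 0.0
    for _ in range(num_cycles):
        X = random.expovariate(lam)  # time between two claims (one cycle)
        on = min(s, X)               # time paying at the higher rate r1
        paid += r1 * on + r0 * (X - on)
        total_time += X
    return paid / total_time
```

For instance, simulated_rate(1.0, 2.0, 1.0, 1.0) should be close to closed_form_rate(1.0, 2.0, 1.0, 1.0) = 2 − e^(−1).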