
Module 1: Concepts of Random walks, Markov Chains, Markov Processes

Lecture 2: Random Walks

The Lecture Contains:

N-stage transition probability

Some relevant background information

Classification of the states of a Markov Chain

Recurrence

Wald's Equations

Examples

Transition probability matrices of a Markov chain

Periodicity

Generating functions

file:///E|/courses/introduction_stochastic_process_application/lecture2/2_2.html[9/30/2013 12:45:22 PM]

N-stage transition probability

Consider the following: at time m the process is at state i, while at time m + n it is at state j. Then the probability that the process moves from state i to state j in n stages is given by the n-stage transition probability p_ij^(n) = P(X_{m+n} = j | X_m = i).

Pictorially we have the following diagram (Figure 1.16) to illustrate the one dimensional stochastic

process.

Figure 1.16: Illustration of the concept of the transition matrix

Here the blue lines are the individual one-step transitions, as the case may be, while the red line denotes a transition which can be expressed in terms of one-stage transition probability values, as the case may be. Finally, the green line denotes the n-stage transition, and in order to avoid confusion we consider the initial and final states as i and j ONLY.
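As a concrete aid, here is a minimal pure-Python sketch of computing the n-stage transition probability p_ij^(n) as the (i, j) entry of the n-th power of the one-step matrix. The 3-state matrix below is hypothetical; the text gives no numerical example at this point.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """Return the n-stage transition matrix, i.e. P multiplied by itself n times."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]  # identity
    for _ in range(n):
        R = mat_mul(R, P)
    return R

# Hypothetical one-step transition matrix of a 3-state chain.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

P4 = n_step(P, 4)
print(P4[0][2])  # probability of going from state 0 to state 2 in 4 stages
```

Each row of the result still sums to one, as any transition matrix must.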


Some relevant background information

Consider events B_1, B_2, ..., which are mutually exclusive and exhaustive, such that B_1 ∪ B_2 ∪ ... = S is the sure event, i.e., technically the whole of the sample space, and also assume that P(B_i) > 0 holds for each i. Now suppose A is an event; then what may be of interest to us is to find P(A). Now, due to the fact that A = (A ∩ B_1) ∪ (A ∩ B_2) ∪ ... is true, we can write P(A) = P(A ∩ B_1) + P(A ∩ B_2) + ..., which with conditional probability now becomes a sum of terms of the form P(A | B_i) P(B_i).

Pictorially we can denote this as shown in Figure 1.17.

Figure 1.17: Illustration of the concept of conditional probability and joint distribution

Hence we can write each joint probability P(A ∩ B_i) as P(A | B_i) P(B_i), and therefore P(A) = Σ_i P(A | B_i) P(B_i).
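The law of total probability just stated can be checked with a tiny numeric example; the probabilities below are hypothetical and chosen only for illustration.

```python
# P(B_i): mutually exclusive and exhaustive, so these must sum to 1.
P_B = [0.2, 0.5, 0.3]
# P(A | B_i): hypothetical conditional probabilities.
P_A_given_B = [0.9, 0.4, 0.1]

# Law of total probability: P(A) = sum_i P(A | B_i) * P(B_i).
P_A = sum(pa * pb for pa, pb in zip(P_A_given_B, P_B))
print(P_A)  # 0.9*0.2 + 0.4*0.5 + 0.1*0.3 = 0.41
```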


So, generalizing to the Markov chain, we can write p_ij^(2) = Σ_k p_ik p_kj, p_ij^(3) = Σ_k p_ik^(2) p_kj, and so on.

Figure 1.18: Movement from state i to state j through an arbitrary intermediate position at any point of time in between

Figure 1.18 thus shows how the stochastic process starts at state i at time m and finally goes to state j at time m + n. What is interesting to note is how this transition takes place. A general understanding of the stochastic process movement would make it clear that the process could have arbitrarily been at some state k at any point of time between m and m + n, and the colour scheme of red, blue and green should make this arbitrary movement clear.

Using matrix notation we already have P = (p_ij), such that p_ij^(2) means the (i, j)-th element of the matrix P^2, and not that we simply multiply p_ij × p_ij. Thus it means that P^(2) = P × P. Similarly we have P^(3) = P^(2) × P, hence P^(3) = P^3. Thus, generalizing, we have P^(n) = P^n and also P^(m+n) = P^(m) × P^(n). Here it must be remembered that we are considering that there are no structural breaks or changes in the underlying distribution, i.e., the same one-step matrix holds throughout. If such a break occurs, then somewhere the general structure of the transition probability matrix would be different, i.e., for t = m, m+1, ..., m* we would have one underlying distribution, and afterwards, from t = m*+1, m*+2, ..., m*+n, the distribution would be another.

Hence the basic thing required is to know the one-step transition matrix (p_ij). If the initial distribution and the one-step transition probabilities p_ij are given, then the n-stage probabilities follow, i.e., P^(n) = P^(n-1) × P = P^n. Thus we have the probability in terms of the intermittent probability and the transition matrix.
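A quick numeric check, on a hypothetical 2-state chain, that the (m+n)-stage matrix factors as the product of the m-stage and n-stage matrices (the Chapman-Kolmogorov property discussed in this section):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """Return P multiplied by itself n times."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

# Hypothetical one-step matrix of a 2-state chain.
P = [[0.7, 0.3],
     [0.4, 0.6]]

lhs = mat_pow(P, 5)                           # P^(2+3)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))   # P^(2) x P^(3)
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)  # True
```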


Classification of the states of a Markov Chain

In this classification we will say that state j is accessible from state i if p_ij^(n) > 0 for some n ≥ 0, where n denotes the stage of transition. If this is not so, then state j is not accessible from state i.

Example 1.12

Let us consider a conditional probability matrix on four states as given below.

A few observations from Example 1.12

Then, if the question is asked, is state 3 accessible from state 1, the answer is No.

No state can be reached from state 1, i.e., state 1 is like a sink, as p_11 = 1, and it is called an absorbing state. Similarly, once you reach state 4 you remain in that state for ever, as p_44 = 1, so it is also an absorbing state.

It is possible to go from state 3 to state 1, i.e., 3 to 2 to 1, but the reverse is not possible.

In case we have i → j as well as j → i, then the two states are communicating between themselves. So in the example given above we have 2 → 3, as 2 leads to 3, and also 3 → 2, as 3 leads to 2; hence state 2 and state 3 are communicating states.

A set C of states in a Markov chain is said to be closed if p_ij = 0 for every i ∈ C and j ∉ C, whatever the set C may be.

If a subset of a Markov chain is closed, then the subset also forms a Markov chain.

If a Markov chain has no proper closed subset, i.e., if no set of states other than the whole state space is closed, then the Markov chain is said to be irreducible.

Furthermore, if i → j and j → k, then i → k, which means we are only required to show that p_ik^(n) > 0 for some n.

Note

Now suppose we know that i → j and j → k; then, since p_ij^(n) > 0 and p_jk^(m) > 0 are both true for some n and m, from the Chapman-Kolmogorov equations we can show that p_ik^(n+m) ≥ p_ij^(n) p_jk^(m) > 0 is true, i.e., i → k holds for some n + m ≥ 1. For the interested readers we would advise that, rather than panic, they should wait till we cover the simple proof of the Chapman-Kolmogorov equations.

Consider a fixed, but arbitrary, state i and suppose at time 0 the process is at state i, i.e., X_0 = i; we also define the event that at the n-th step the process is back at state i.
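Accessibility can be checked mechanically: j is accessible from i iff there is a path of positive one-step probabilities from i to j, which is a reachability question on a directed graph. Below is a sketch using a hypothetical matrix mirroring Example 1.12, with the text's states 1-4 relabelled 0-3, so states 0 and 3 are absorbing and states 1 and 2 communicate.

```python
def reachable(P, i, j):
    """Depth-first search along the positive-probability edges of the chain."""
    n = len(P)
    seen, frontier = {i}, [i]
    while frontier:
        u = frontier.pop()
        for v in range(n):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                frontier.append(v)
    return j in seen

# Hypothetical entries: states 0 and 3 absorbing, states 1 and 2 communicating.
P = [[1.0,  0.0,  0.0,  0.0],
     [0.25, 0.25, 0.25, 0.25],
     [0.0,  0.5,  0.25, 0.25],
     [0.0,  0.0,  0.0,  1.0]]

print(reachable(P, 2, 0))  # True: 2 -> 1 -> 0 is possible
print(reachable(P, 0, 2))  # False: state 0 is absorbing
```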


Recurrence

Consider an arbitrary state i, which is fixed, such that for all integers n ≥ 1 we define f_i^(n) = P(X_n = i, X_k ≠ i for k = 1, ..., n-1 | X_0 = i), which is basically the probability that, after having started from state i, the process comes back to state i for the first time ONLY at the n-th transition. The definition makes it clear that f_i^(1) = p_ii and 0 ≤ f_i^(n) ≤ 1 for n ≥ 1, and we also define f_i^(0) = 0. Now let us consider the simple illustrations (Figure 1.19 and Figure 1.20).

Figure 1.19: Movement from state i back to state i, considering that there may or may not be any reoccurrence or visits to state i during the time period

Figure 1.20: Movement from state i back to state i, considering that there may or may not be any reoccurrences or visits to state i in between

From both the figures (Figure 1.19 and Figure 1.20) it is clear that the first return to state i occurs at some transition n. If this return is denoted by the event A_n, then the events A_1, A_2, ... are mutually exclusive. Now the probability of the event that the first return is at the n-th transition is f_i^(n), which we already know. For the remaining transitions we will only deal with those for which the required condition holds true; in case it does not, we will not consider them.


So we have p_ii^(n) = Σ_{k=1}^{n} f_i^(k) p_ii^(n-k) for n ≥ 1, and we also have p_ii^(0) = 1. Hence each f_i^(n) can be obtained recursively, as f_i^(n) = p_ii^(n) - Σ_{k=1}^{n-1} f_i^(k) p_ii^(n-k).


Let us pictorially illustrate the concept in Figure 1.21, where we start at state i at time 0 and after time n the process comes back to state i. If one observes closely, the main difference with Figure 1.18 is the fact that here we denote the states along the line, while in Figure 1.18 it was the time which was depicted along the line.

Figure 1.21: Illustration of return of the stochastic process to state i exactly at the end of time n

The colour scheme in Figure 1.21 is quite easy to understand if we concentrate on the fact that it can be the green line starting at position i at time 0 and returning to position i for the first time, while the blue would denote that the process reaches position i once and then continues till it reaches position i for the second time. Continuing with the same logic we can have such visits to position i many times, remembering that position i is reached at time n, i.e., X_n = i.

Then we have f_i^(0) = 0 and p_ii^(0) = 1, from which it is easy to note that f_i^(1) = p_ii^(1) for n = 1 and f_i^(n) = p_ii^(n) - Σ_{k=1}^{n-1} f_i^(k) p_ii^(n-k) for n > 1. Utilizing these two formulae we easily get f_i^(2), f_i^(3), and so on, and through simple induction we can show that every f_i^(n) is determined by p_ii^(1), ..., p_ii^(n).

So the probability that the system ever returns to its original state i is given by f_i = Σ_{n=1}^{∞} f_i^(n). Hence:

f_i = 1 means that the system returns to state i in a certain (finite) number of steps;

f_i = 0 means that the system never returns to state i;

f_i < 1 means that the system may or may not return to state i in a certain number of steps.

One can understand that these transition probabilities are dependent on (i) the initial and final states and (ii) the time of transition from the initial to the final state, i.e., p_ij^(n) is some function of i and j as well as of the difference in time periods.


Wald's Equations

Consider S_N to be the sum of a random number N of random variables (r.v.'s), i.e., S_N = X_1 + X_2 + ... + X_N, and remember that X_1, X_2, ... are i.i.d. random variables; then E[S_N] = E[N] E[X_1]. The basic concepts of Wald are the founding stones on which the rich branch of Sequential Analysis (Sequential Point Estimation, Sequential Interval Estimation and Sequential Hypotheses Testing) has developed.

Now, if we have E[N] < ∞ and E[|X_1|] < ∞, then E[S_N] = E[N] E[X_1]. Without repetition we would like to mention that, for the interested readers, we would advise that rather than panic they should wait till we cover the simple proof of Wald's equations. Remember, the importance of this equation stems from the fact that one can also find the variance, which is given by Var(S_N) = E[N] Var(X_1) + Var(N) (E[X_1])^2.
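A Monte Carlo sketch of Wald's equation E[S_N] = E[N] E[X_1], with N drawn independently of the i.i.d. X_i; the distributions below are hypothetical and chosen only so the expectations are easy to compute by hand.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def one_run():
    # N uniform on {1, ..., 10}, so E[N] = 5.5; X uniform on {1, 2, 3}, so E[X] = 2.
    n = random.randint(1, 10)
    return sum(random.choice([1, 2, 3]) for _ in range(n))

trials = 200_000
avg = sum(one_run() for _ in range(trials)) / trials
print(avg)  # should be close to E[N] * E[X] = 5.5 * 2 = 11
```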

Stationary transition probability

When the one-step transition probabilities are independent of time, i.e., P(X_{n+1} = j | X_n = i) = p_ij for all n, we say we have a Markov process which has stationary transition probabilities.

Thus, if the transition probabilities are stationary, then we all know the transition probability matrix is denoted as P = (p_ij), i.e., p_ij = P(X_{n+1} = j | X_n = i), and this is the Markov matrix or the transition probability matrix of the process. So generally the i-th row of P denotes the distribution, or the collection of realized values, of X_{n+1} under the condition that X_n = i. If the number of states is finite, then the transition probability matrix, P, is a finite square matrix. Also, it is true that we have

(i) p_ij ≥ 0 for all i and j,

(ii) Σ_j p_ij = 1 for each i.

Now the whole process, i.e., {X_n}, is known if we know the values of p_ij and the initial distribution, or more generally the probability distribution of the process. Assume P(X_0 = i_0) = p_{i_0}, given which we want to calculate P(X_0 = i_0, X_1 = i_1, ..., X_n = i_n), from which any probability involving X_0, ..., X_n follows using the axioms of probability.

Thus

P(X_0 = i_0, ..., X_n = i_n) = P(X_n = i_n | X_0 = i_0, ..., X_{n-1} = i_{n-1}) P(X_0 = i_0, ..., X_{n-1} = i_{n-1})   (1.1)

By the definition of a Markov process we also have

P(X_n = i_n | X_0 = i_0, ..., X_{n-1} = i_{n-1}) = P(X_n = i_n | X_{n-1} = i_{n-1}) = p_{i_{n-1} i_n}   (1.2)

Substituting (1.2) in (1.1) we have


Furthermore, in the next step we have P(X_0 = i_0, ..., X_n = i_n) = p_{i_{n-1} i_n} p_{i_{n-2} i_{n-1}} P(X_0 = i_0, ..., X_{n-2} = i_{n-2}), and using induction we finally have P(X_0 = i_0, X_1 = i_1, ..., X_n = i_n) = p_{i_0} p_{i_0 i_1} ... p_{i_{n-1} i_n}. To make the understanding better, and also to bring to light the usefulness of Markov chains, we give a few more examples which we are sure will be appreciated by the readers.
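The path-probability product formula derived above is one line of code. A sketch with a hypothetical 2-state chain and a hypothetical initial distribution:

```python
p0 = [0.6, 0.4]        # initial distribution P(X_0 = i); hypothetical
P = [[0.7, 0.3],       # hypothetical one-step transition matrix
     [0.4, 0.6]]

def path_probability(path):
    """P(X_0 = i_0, ..., X_n = i_n) = p_{i0} * p_{i0 i1} * ... * p_{i(n-1) in}."""
    prob = p0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    return prob

print(path_probability([0, 1, 1, 0]))  # 0.6 * 0.3 * 0.6 * 0.4
```

Summed over every possible path of a fixed length, these probabilities add to one, which is a useful sanity check on the formula.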


Examples 1.13

Let X_1, X_2, ... be discrete random variables such that the realized values of each X_i are non-negative integers, with P(X_i = k) = a_k ≥ 0 and Σ_k a_k = 1. We must remember that the observations are independent. From these we define two Markov processes, which are:

Case 1: Consider the process {X_n} itself, and assume X_0 is known (which is not at all difficult to do, considering that the initial condition of any process is known beforehand); then the Markov matrix is the matrix whose every row is (a_0, a_1, a_2, ...), where the fact that each row is exactly equal is due to the simple fact that the random variable X_{n+1} is independent of X_n.

Case 2: Another class of important Markov chains arises when we consider the successive partial sums S_n of the X_i, i.e., S_n = X_1 + ... + X_n. By definition we have S_0 = 0. Then the process {S_n} is a Markov chain, and we can easily calculate the transition probability as p_ij = P(S_{n+1} = j | S_n = i) = a_{j-i} for j ≥ i and 0 otherwise; here we use the independence of the X_i. Thus, writing the transition probability matrix, we have it of the following form, with the sequence (a_0, a_1, a_2, ...) appearing on and above the diagonal of each row.

If the possible values of the random variables are permitted to be both positive as well as negative integers, then instead of labeling the states by non-negative integers, as we usually do, we may index them by the totality of the integers, which will make the transition matrix look more symmetric in nature, with p_ij = a_{j-i} for all integers i and j.


Examples 1.14

Consider a one-dimensional random walk, whose state space is a finite or infinite subset of the integers, such that a particle at state i can in a single transition move to state i - 1 or state i + 1, or it can remain at the same state i. All three movements have some probability: say the particle moves up with probability p_i, moves down with probability q_i, and stays put with probability r_i, where p_i, q_i, r_i ≥ 0 and p_i + q_i + r_i = 1. Given this set of information, one can write the transition matrix as below.

Let us consider a gamble as an example of a simple random walk. Suppose that we have two persons, say Ram and Shyam, with initial amounts of A and B INR with them respectively. Consider the probability of Ram winning one unit from Shyam to be p and the corresponding probability of losing one unit to be q, where p + q = 1. So if we denote by X_n the fortune of Ram after n such changes in position, then clearly {X_n} denotes a random walk. It is very easy to see that once the state reaches either 0 or A + B, the process remains in that state. This process is known as the gambler's ruin, and in that case the transition probability matrix is as shown below.

Different variants of this game can be constructed so that we have different examples of random walk,

some of which are briefly discussed below in order to motivate the reader in the application aspect of

random walks.
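The gambler's-ruin transition matrix described above can be constructed programmatically; the sketch below uses hypothetical stakes A = 2 and B = 3, so the states 0, ..., A + B track Ram's fortune, with 0 and A + B absorbing.

```python
def gamblers_ruin_matrix(total, p):
    """One-step matrix: win one unit w.p. p, lose one w.p. 1 - p;
    states 0 and `total` are absorbing."""
    P = [[0.0] * (total + 1) for _ in range(total + 1)]
    P[0][0] = 1.0              # Ram ruined: the process stays at 0
    P[total][total] = 1.0      # Shyam ruined: the process stays at A + B
    for i in range(1, total):
        P[i][i + 1] = p        # Ram wins one unit
        P[i][i - 1] = 1 - p    # Ram loses one unit
    return P

P = gamblers_ruin_matrix(2 + 3, 0.5)   # A = 2, B = 3, fair game
print(P[1])  # from fortune 1: down to ruin w.p. 0.5, up to 2 w.p. 0.5
```

Varying p here reproduces the biased-coin variants discussed next.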


Case 1: Suppose that Ram has amount A of money and Shyam has an infinite amount of money, and they play this gamble using an unbiased coin. In that case the transition probability matrix looks as given below.

Case 2: Suppose that Ram has amount A of money and Shyam has amount B of money, and they play this gamble using a biased coin, such that the probability of Ram winning one unit of money is p and of losing one unit of money is q. In that case the transition probability matrix looks as given below.

Case 3: Suppose that Ram has amount A of money and Shyam has amount B of money, and they play this gamble using a biased die, such that Ram wins one unit of money when the numbers 1 or 2 come, loses one unit of money when the numbers 5 or 6 come, and the game is a draw when the numbers 3 or 4 come. In that case the transition probability matrix looks as given below.


Examples 1.15

The significance of random walks is not only apparent in different examples of gambling, but is also evident (due to reasonable and good discrete approximations) in many physical processes which have to do with the diffusion of particles, or particles which are continuously colliding and being randomly bombarded; say, for example, gas particles in a gas chamber which, for the theoretical situation, we consider as an adiabatic process (i.e., no heat is being transferred into or out of the system), such that when a particle collides with another particle or with the wall of the container it rebounds with the same level of total energy. Thus, as any single particle is subjected to collisions and random impulses, its position fluctuates randomly, but it does describe a continuous path. For simplicity, if we consider the future position to be dependent only on the present position, then the process denoted by X_t is Markovian, where t is the time. A discrete approximation to such a continuous motion corresponds to a random walk. A classical example is the symmetric random walk, where the state space is the totality of all integers and the transition probability matrix elements are given by p_ij = 1/2 for j = i - 1 or j = i + 1, and p_ij = 0 otherwise.

Now the apparent question is: why are random walks so useful? Apart from the above examples, which are all related to gambling, can we find some other interesting examples of random walks? The answer is yes. Apart from gambling (which by itself is very interesting and exciting), random walks are frequently used as approximations to describe a variety of physical processes, e.g., the diffusion of particles. Let us give an example where we consider a gas particle in a box as shown in Figure 1.21, whose initial position is A = (x_A, y_A, z_A), given in the Cartesian coordinate system X, Y and Z. Now consider that after some time the gas particle is at position B = (x_B, y_B, z_B).

Figure 1.21: A gas molecule moving randomly inside the chamber


As this particle is subjected to random collisions and impulses, which happen due to the particle being bombarded by all the other gas particles as well as by the wall of the container, its position changes randomly, though the particle as such can be described by its continuous path of movement. With the important assumption that the future position of the particle, say B, depends only on its present position, say A, the process denoted by X_t is Markovian, where t is the time. A discrete approximation of this continuous process is provided by the random walk, and a classical discrete version of Brownian motion is provided by the symmetric random walk. A symmetric random walk is a Markov chain where the state space is the set of all integers (we consider a simple example here on), and the transition probability is given by p_ij = 1/2 for j = i - 1 or j = i + 1, and p_ij = 0 otherwise.

What is important to note about the stochastic process is the initial condition, based on which we can find all the characteristics of the process. Now consider a transition probability matrix for a walk on the non-negative integers: if p_00 = 1, then state 0 is an absorbing state, such that once the process reaches state 0 it continues staying there. When instead p_01 = 1, state 0 acts as a reflecting state, like a molecule rebounding after hitting the wall (Figure 1.22).
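The two boundary behaviours just described can be contrasted in a short simulation of a walk on {0, 1, 2, ...}: an absorbing state 0 traps the walk, while a reflecting state 0 pushes it back to 1. The step probability, start position and seed below are all hypothetical.

```python
import random

def walk(steps, reflecting, p_up=0.4, seed=1):
    """Simulate the walk; at state 0 either stay absorbed or rebound to 1."""
    rng = random.Random(seed)
    x = 3  # hypothetical starting position
    for _ in range(steps):
        if x == 0:
            if reflecting:
                x = 1  # rebound, like the molecule hitting the wall
            # else: absorbed, the walk stays at 0
        else:
            x += 1 if rng.random() < p_up else -1
    return x

# With downward drift (p_up < 0.5) the absorbing walk is eventually trapped
# at 0, while the reflecting walk keeps bouncing off the boundary.
print(walk(1000, reflecting=False))
print(walk(1000, reflecting=True))
```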

Figure 1.22: A gas molecule moving randomly inside the chamber but rebounding from the walls


Transition probability matrices of a Markov chain

A Markov chain is completely defined by its one-step transition matrix and the specification of a probability distribution on the state of the process at time t = 0. Given this, what is of main concern is to calculate the n-step transition probability matrix P^(n) = (p_ij^(n)), where p_ij^(n) is the probability that the process goes from state i to state j in n transitions; thus we have p_ij^(n) = P(X_{m+n} = j | X_m = i), and remember this is a stationary transition probability.

Theorem 1.1

If the one-step probability matrix of a Markov chain is P = (p_ij), then p_ij^(n) = Σ_k p_ik^(m) p_kj^(n-m) for any fixed pair of non-negative integers m and n satisfying m ≤ n, where we define p_ij^(0) = 1 if i = j and 0 otherwise.

Proof of Theorem 1.1

The proof is very simple. Consider the case when n = 2, i.e., the event that we move from state i to state j in two transitions, in mutually exclusive ways such that the first transition takes place from i to some intermediate state k, and then from k to state j. Because of the Markovian assumption, the probability of the first transition from state i to state k is p_ik, and that of moving from state k to state j is p_kj, hence p_ij^(2) = Σ_k p_ik p_kj. If the probability of the process initially being at state i is p_i, then the probability of the process being at state j at time n is given by Σ_i p_i p_ij^(n). What is of main interest to us is to find p_ij^(n), and for doing that we need to describe a few important properties of Markov chains, which we now do again in order to have a better understanding of this process.

Properties

State j is accessible from state i if p_ij^(n) > 0 for some integer n ≥ 0, which means that state j is accessible from state i if there is positive probability that state j can be reached in a finite number of transitions starting from state i.

If two states, i and j, are each accessible from the other, then the two states are said to communicate, and the notation is i ↔ j. In case two states do not communicate, then either (i) p_ij^(n) = 0 for all n, or (ii) p_ji^(n) = 0 for all n, or both are true.

Properties of communication (one should remember that communication is an equivalence relation):

Reflexivity: i ↔ i, i.e., p_ii^(0) = 1 by definition.


Symmetry: in case i ↔ j, then j ↔ i.

Transitivity: in case i ↔ j and j ↔ k, then i ↔ k. As i → j is true, there exists an integer n such that p_ij^(n) > 0. Also, j → k being true, there exists an integer m such that p_jk^(m) > 0. Consequently we have p_ik^(n+m) ≥ p_ij^(n) p_jk^(m) > 0.


Examples 1.16

Let us consider the example given below: a walk on the six states 0, 1, ..., 5 in which states 0 and 5 are absorbing, where + marks a positive one-step probability (the exact values are immaterial here):

     0   1   2   3   4   5
0    1   0   0   0   0   0
1    +   0   +   0   0   0
2    0   +   0   +   0   0
3    0   0   +   0   +   0
4    0   0   0   +   0   +
5    0   0   0   0   0   1

From the matrix above we can easily find the equivalence classes: {0}, {1, 2, 3, 4}, {5}.

Examples 1.17

In case the matrix is of the following form, where states 0 and 5 are now reflecting rather than absorbing:

     0   1   2   3   4   5
0    0   1   0   0   0   0
1    +   0   +   0   0   0
2    0   +   0   +   0   0
3    0   0   +   0   +   0
4    0   0   0   +   0   +
5    0   0   0   0   1   0

then there is only one equivalence class, {0, 1, 2, 3, 4, 5}.

Equivalence class and Irreducible Markov chain

Let us now partition the totality of the states into equivalence classes, such that the states in an equivalence class are those that communicate with each other. A Markov chain is irreducible if the equivalence relation induces only one class, i.e., all states communicate. Thus in an irreducible Markov chain all the states communicate with each other. For the example given below, we have three equivalence classes, such that


1) true

2) true

3) false

4) false

5) true
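Equivalence classes like those in Examples 1.16 and 1.17 can be computed mechanically: i and j communicate iff each is reachable from the other, so the classes are the strongly connected components of the positive-entry graph. The sketch below uses a hypothetical matrix with the absorbing-barrier structure of Example 1.16 (the 0.5 entries stand in for the lost fractions).

```python
def classes(P):
    """Group states into communicating classes via a transitive closure."""
    n = len(P)
    # reach[i][j] = True iff j is accessible from i (edges where p_ij > 0).
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):              # Floyd-Warshall style closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    seen, out = set(), []
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            seen |= cls
            out.append(sorted(cls))
    return out

P = [[1.0, 0,   0,   0,   0,   0],
     [0.5, 0,   0.5, 0,   0,   0],
     [0,   0.5, 0,   0.5, 0,   0],
     [0,   0,   0.5, 0,   0.5, 0],
     [0,   0,   0,   0.5, 0,   0.5],
     [0,   0,   0,   0,   0,   1.0]]

print(classes(P))  # [[0], [1, 2, 3, 4], [5]]
```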


Periodicity

If we denote the period of state i as d(i), then it denotes the greatest common divisor (also known as HCF) of all integers n ≥ 1 for which p_ii^(n) > 0. In case p_ii^(n) = 0 for all n ≥ 1, we define d(i) = 0.

For the case when the transition probability matrix alternates the chain deterministically between two sets of states, the periodicity of each state is two (2). Now, in case for some reason p_ii > 0 for some state i, then the periodicity of every state communicating with i is one (1). For the case where the transition probability matrix cycles deterministically through n states, each state has periodicity n.
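The gcd definition of the period can be approximated directly from the first several matrix powers. A sketch with two hypothetical 2-state chains: a deterministic flip-flop (period 2) and a lazy variant with p_00 > 0 (period 1).

```python
from math import gcd

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, max_n=50):
    """gcd of {n <= max_n : p_ii^(n) > 0}; returns 0 if i is never revisited.
    Truncating at max_n is an approximation to the full gcd over all n."""
    d = 0
    Pn = P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = mat_mul(Pn, P)
    return d

flip = [[0.0, 1.0],    # deterministic alternation between the two states
        [1.0, 0.0]]
print(period(flip, 0))  # 2

lazy = [[0.5, 0.5],    # p_00 > 0, so returns at n = 1 force period 1
        [1.0, 0.0]]
print(period(lazy, 0))  # 1
```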


Theorem 1.2

1) If i ↔ j, then d(i) = d(j).

2) If state i has period d(i), then there exists an integer N (depending on i) such that p_ii^(n d(i)) > 0 for all integers n ≥ N.

3) If p_ji^(m) > 0, then p_ji^(m + n d(i)) > 0 for all n (positive integer) sufficiently large.

A Markov chain for which each state has periodicity one (1) is called aperiodic.

Generating functions

Let us first define the concept of a general generating function. In case we have a sequence {a_n}, then the generating function A(s) is defined as A(s) = Σ_{n≥0} a_n s^n for the case when |s| < 1. Hence, in a similar manner, the generating function for the sequence p_ij^(n), n ≥ 0, is given by P_ij(s) = Σ_{n≥0} p_ij^(n) s^n, |s| < 1.

We know that if we have (i) A(s) = Σ_{n≥0} a_n s^n and (ii) B(s) = Σ_{n≥0} b_n s^n, then we can write the product of the two as given below, i.e.,

A(s) B(s) = C(s) = Σ_{n≥0} c_n s^n, where c_n = Σ_{k=0}^{n} a_k b_{n-k}.

Let us identify the a_n with the f_ij^(n) and the b_n with the p_jj^(n), and if we compare coefficients we immediately obtain p_ij^(n) = Σ_{k=0}^{n} f_ij^(k) p_jj^(n-k) for n ≥ 1. One should remember that this relation is not valid for n = 0, and using simple arguments we have

Objectives_template

file:///E|/courses/introduction_stochastic_process_application/lecture2/2_17.html[9/30/2013 12:45:27 PM]

P_ij(s) = F_ij(s) P_jj(s) for i ≠ j, where f_ij^(n) is the probability that the first passage from state i to state j occurs at the n-th transition. Again, for i = j we have p_ii^(0) = 1, and utilizing the convolution we can easily show that P_ii(s) can be written as P_ii(s) = 1 / (1 - F_ii(s)).

We say a state i is recurrent iff f_i = Σ_{n≥1} f_ii^(n) = 1, i.e., a state is recurrent iff the probability is 1 that, starting at state i, it will return to state i after a finite number of transitions. A non-recurrent state is said to be transient.
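The coefficient rule behind the product of two generating functions is an ordinary convolution, c_n = Σ_{k=0}^{n} a_k b_{n-k}, and is easy to check on short hypothetical sequences:

```python
def convolve(a, b):
    """Coefficients of the product of the polynomials with coefficients a and b."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2s + 3s^2)(4 + 5s) = 4 + 13s + 22s^2 + 15s^3
print(convolve([1, 2, 3], [4, 5]))  # [4.0, 13.0, 22.0, 15.0]
```

Identifying the first sequence with f_ij^(n) and the second with p_jj^(n) gives exactly the first-passage decomposition stated in this section.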


Before we proceed with some proofs relevant to our stochastic processes, we need to see and understand two important results, which are stated as Lemmas (Lemma 1.1 (a) and Lemma 1.1 (b)) below.

Lemma 1.1 (a)

If a_n converges (i.e., a_n → a as n → ∞), then it implies that (1 - s) Σ_{n≥0} a_n s^n → a as s → 1-.

Proof of Lemma 1.1 (a)

What we will prove here is that (1 - s) Σ_{n≥0} a_n s^n → a as s → 1- (see carefully, this is exactly what is claimed above). Since a_n converges to a, for any ε > 0 one can find N such that |a_n - a| < ε/2 holds for all n ≥ N. Choose that N and, using (1 - s) Σ_{n≥0} s^n = 1, write

(1 - s) Σ_{n≥0} a_n s^n - a = (1 - s) Σ_{n<N} (a_n - a) s^n + (1 - s) Σ_{n≥N} (a_n - a) s^n.

Look carefully and we immediately note that the second term is bounded by (ε/2)(1 - s) Σ_{n≥N} s^n ≤ ε/2, while the first term is bounded by (1 - s) Σ_{n<N} |a_n - a|, which can be made smaller than ε/2 by taking s sufficiently close to 1 (this can be arbitrarily done considering any combination of ε and N). Combining these two we have |(1 - s) Σ_{n≥0} a_n s^n - a| < ε, provided s is sufficiently close to 1.


Lemma 1.1 (b)

(b) If a_n ≥ 0 for all n and Σ_{n≥0} a_n = a (finite or infinite), then lim_{s→1-} Σ_{n≥0} a_n s^n = a.

Proof of Lemma 1.1 (b)

Since a_n s^n ≤ a_n for 0 < s < 1, the case a < ∞ is obvious. In case a = ∞, then by our hypothesis, for any M > 0 we can find N such that Σ_{n≤N} a_n > M, hence Σ_{n≥0} a_n s^n ≥ s^N Σ_{n≤N} a_n > M s^N for 0 < s < 1. Now, as A(s) = Σ_{n≥0} a_n s^n is a monotone increasing function in s, it has a limit as s → 1-, which will be equal to a. Here, remember, we utilize the result proved in (a).
