
Uploaded by AJAN A

When I searched the internet about Markov chains, I couldn't find much. I think this write-up will help you understand the basics of Markov chains.



**"A Markov process is a process that describes the evolution of a new state from an old state."**

New state = f(old state, noise)

Lecture outline:

- Checkout counter example
- N-step transition probabilities
- Classification of states

Example: Consider a checkout counter with room for 10 persons in the queue. At each time step the following possibilities exist. Let p be the probability of a customer arrival and q the probability that a customer is served:

- Customer arrival and no departure: p(1-q)
- Service completion and no new arrival: q(1-p)
- Both an arrival and a departure: pq
- Nothing happens: (1-p)(1-q)
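As a sketch (not part of the original notes), the four cases above can be simulated step by step; the values p = 0.3 and q = 0.5 are assumed for illustration, and the queue capacity of 10 comes from the example:

```python
import random

def step(state, p, q, capacity=10):
    """One transition of the checkout-counter chain.

    state is the current queue length (0..capacity). An arrival
    occurs with probability p; a service completion occurs with
    probability q, but only if the queue is non-empty.
    """
    arrival = random.random() < p
    departure = state > 0 and random.random() < q
    if arrival and state < capacity:
        state += 1          # arrival and room in the queue
    if departure:
        state -= 1          # service completion
    return state

random.seed(0)
state = 0
for _ in range(1000):
    state = step(state, p=0.3, q=0.5)
assert 0 <= state <= 10
```

Note that when both an arrival and a departure happen (the pq case), the queue length is unchanged, matching the list above.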

Finite-state Markov chains

Let Xn be the state after n transitions; it belongs to a finite set {1, ..., m}.

X0 is either given or random.

X0 --- Markov transitions ---> Xn
(initial)                      (final)

The state X0 can reach the final state Xn after n transitions, but the intermediate transitions happen randomly.

Markov property/assumption

The Markov assumption is that the next state depends only on the current state, not on the past states:

pij = P(Xn+1 = j | Xn = i), where i is the current state
    = P(Xn+1 = j | Xn = i, Xn-1, ..., X0)

This says that if we know the current state, all past information can be neglected.

Model specification:

Identify the set of possible states

Identify the possible transitions

Identify the transition probabilities
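A minimal sketch of these three steps for the checkout-counter chain above; the capacity of 10 comes from the earlier example, while the values p = 0.3 and q = 0.5 are assumed:

```python
def build_matrix(p, q, capacity=10):
    """Transition matrix P[i][j] for the checkout-counter chain.

    States are queue lengths 0..capacity. At 0 no departure is
    possible; at capacity no arrival is possible; in between, the
    four cases are p(1-q), q(1-p), pq, and (1-p)(1-q).
    """
    n = capacity + 1
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        up = p if i == 0 else (p * (1 - q) if i < capacity else 0.0)
        down = q if i == capacity else (q * (1 - p) if i > 0 else 0.0)
        if i < capacity:
            P[i][i + 1] = up
        if i > 0:
            P[i][i - 1] = down
        P[i][i] = 1.0 - up - down   # everything else stays put
    return P

P = build_matrix(0.3, 0.5)
# every row of a transition matrix must sum to 1
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```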

Example: Consider the case of a projectile. To predict its future positions, both its current position and its velocity are required. If either piece of information is missing, we need the past positions of the projectile to reconstruct the trajectory and find the future position. So when we select a state variable, we have to collect all information about it that is relevant to the future state, and it may or may not need to include every detail of the past.

N-step transition probabilities

rij(n) = P(Xn = j | X0 = i)

For zero transitions:

rij(0) = 1 if i = j
       = 0 if i ≠ j

For a single transition:

rij(1) = pij

N-step transition diagram

Consider the following diagram. For the probability of travelling from i to j we can use the following recursive equation, where m is the number of states:

rij(n) = Σ (k=1 to m) rik(n-1) pkj

For a random initial state:

P(Xn = j) = Σ (i=1 to m) P(X0 = i) rij(n)

Equivalently, conditioning on the first transition instead of the last:

rij(n) = Σ (k=1 to m) pik rkj(n-1)
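The recursion can be sketched directly in code; the small two-state transition matrix here is assumed purely for illustration:

```python
def n_step_probs(P, n):
    """r_ij(n) via the recursion r_ij(n) = sum_k p_ik * r_kj(n-1).

    P is an m x m transition matrix (nested lists); the base
    case r_ij(0) is 1 if i == j and 0 otherwise.
    """
    m = len(P)
    r = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    for _ in range(n):
        r = [[sum(P[i][k] * r[k][j] for k in range(m)) for j in range(m)]
             for i in range(m)]
    return r

P = [[0.5, 0.5],
     [0.2, 0.8]]
r2 = n_step_probs(P, 2)
assert abs(r2[0][0] - 0.35) < 1e-12   # r11(2)
assert abs(r2[0][1] - 0.65) < 1e-12   # r12(2)

# random initial state: P(Xn = j) = sum_i P(X0 = i) * r_ij(n)
pi0 = [0.5, 0.5]
pn = [sum(pi0[i] * r2[i][j] for i in range(2)) for j in range(2)]
assert abs(sum(pn) - 1.0) < 1e-12
```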

Example:

          n=0    n=1    n=2     n=100   n=101
r11(n)    1      0.5    0.35    2/7     2/7
r12(n)    0      0.5    0.65    5/7     5/7
r21(n)    0      0.2    0.26    2/7     2/7
r22(n)    1      0.8    0.74    5/7     5/7

r11(n) and r21(n) converging to the same probability shows that, in the long run, the final state does not depend on the initial state. What really happens is that the randomness introduced during the transitions washes out the information about the initial state.

The probability of remaining in state 2 is higher because state 2 is more "sticky" in nature than state 1.
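The long-run values in the table can be checked by iterating the recursion 100 times (a sketch; note 2/7 ≈ 0.2857 and 5/7 ≈ 0.7143):

```python
def n_step(P, n):
    """Iterate r(n) = P * r(n-1) starting from the identity."""
    m = len(P)
    r = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    for _ in range(n):
        r = [[sum(P[i][k] * r[k][j] for k in range(m)) for j in range(m)]
             for i in range(m)]
    return r

# the two-state chain from the table: p11 = .5, p12 = .5, p21 = .2, p22 = .8
P = [[0.5, 0.5],
     [0.2, 0.8]]
r100 = n_step(P, 100)
# both rows converge to the same limit (2/7, 5/7): the initial
# state has been washed out by the randomness of the transitions
for i in range(2):
    assert abs(r100[i][0] - 2 / 7) < 1e-9
    assert abs(r100[i][1] - 5 / 7) < 1e-9
```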

Contradictions in the convergence

Convergence can fail. For example, in a periodic chain:

r22(n) = 0 if n is odd, r22(n) = 1 if n is even

so the probabilities oscillate and never converge. The limiting probabilities can also depend on the initial state:

r11(n) = 1
r31(n) = 0
r21(n) = 0.5 as n → ∞

Recurrent states and transient states

A recurrent state is a state to which we can return even after leaving it, i.e., there will always be a return path to that state.

All states that are not recurrent are transient states.

For the final state not to depend on the initial state, there should not be more than one class of recurrent states.
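A sketch of this classification for a finite chain: a state i is recurrent exactly when every state reachable from i can reach i back. The two-state matrix below is assumed for illustration:

```python
def reachable(P, i):
    """All states reachable from i (including i itself)
    through positive-probability transitions."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, prob in enumerate(P[s]):
            if prob > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def is_recurrent(P, i):
    """In a finite chain, i is recurrent iff there is a return
    path to i from every state the chain can reach from i."""
    return all(i in reachable(P, j) for j in reachable(P, i))

# state 0 is absorbing (recurrent); state 1 can leak into 0
# and never return, so it is transient
P = [[1.0, 0.0],
     [0.5, 0.5]]
assert is_recurrent(P, 0)
assert not is_recurrent(P, 1)
```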
