
1. Introduction

Before we give the definition of a Markov process, we will look at an example.

Example 1. Suppose that the bus ridership in a city is studied. After examining several years of data, it was found that 30% of the people who regularly ride the bus in a given year do not regularly ride the bus in the next year. It was also found that 20% of the people who do not regularly ride the bus in a given year begin to ride the bus regularly the next year. If 5000 people ride the bus and 10,000 do not ride the bus in a given year, what is the distribution of riders and non-riders in the next year? In 2 years? In n years?

First we determine how many people will ride the bus next year. Of the people who currently ride the bus, 70% will continue to do so; of the people who do not ride the bus, 20% will begin to ride. Thus:

$$5000(0.7) + 10{,}000(0.2) = 5500 = b_1 = \text{the number of people who ride the bus next year.}$$

By the same argument, we see that:

$$5000(0.3) + 10{,}000(0.8) = 9500 = b_2 = \text{the number of people who do not ride the bus next year.}$$

This system of equations is equivalent to the matrix equation $Mx = b$, where

$$M = \begin{pmatrix} 0.7 & 0.2 \\ 0.3 & 0.8 \end{pmatrix}, \qquad x = \begin{pmatrix} 5000 \\ 10{,}000 \end{pmatrix}, \qquad b = \begin{pmatrix} 5500 \\ 9500 \end{pmatrix}.$$

To compute the result after 2 years, we use the same matrix $M$, but with $b$ in place of $x$. Thus the distribution after 2 years is $Mb = M^2 x$. In fact, after $n$ years the distribution is given by $M^n x$.

The foregoing example is an example of a Markov process. Now for some formal definitions:

Definition 1. A stochastic process is a sequence of events in which the outcome at any stage depends on some probability.

Definition 2. A Markov process is a stochastic process with the following properties:
(a.) The number of possible outcomes or states is finite.
(b.) The outcome at any stage depends only on the outcome of the previous stage.
(c.) The probabilities are constant over time.

If $x_0$ is a vector which represents the initial state of a system, then there is a matrix $M$ such that the state of the system after one iteration is given by the vector $Mx_0$. Thus we get a chain of state vectors $x_0, Mx_0, M^2 x_0, \ldots$, where the state of the system after $n$ iterations is given by $M^n x_0$. Such a chain is called a Markov chain, and the matrix $M$ is called a transition matrix.

A state vector can be of one of two types: an absolute vector or a probability vector. An absolute vector is a vector whose entries give the actual number of objects in a given state, as in the first example. A probability vector is a vector whose entries give the percentage (or probability) of objects in a given state. We will take all of our state vectors to be probability vectors from now on. Note that the entries of a probability vector add up to 1.
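The iteration $M^n x$ above is easy to check numerically. Here is a minimal Python sketch of the bus example, using plain lists and the standard library only (the helper name `apply_matrix` is ours, not from the notes):

```python
# Bus example: column j of M holds the probabilities of leaving state j
# (state 1 = riders, state 2 = non-riders), so each column sums to 1.
M = [[0.7, 0.2],
     [0.3, 0.8]]

def apply_matrix(M, x):
    """Return the matrix-vector product M x."""
    n = len(x)
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

x = [5000, 10000]               # initial distribution
b = apply_matrix(M, x)          # distribution after one year
print([round(v) for v in b])    # → [5500, 9500]

# After n years the distribution is M^n x: apply M repeatedly.
xn = x
for _ in range(2):
    xn = apply_matrix(M, xn)
print([round(v) for v in xn])   # → [5750, 9250], the distribution M^2 x
```

The rounding only cleans up floating-point noise; the loop is exactly the chain $x, Mx, M^2x, \ldots$ described above.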


2. The Transition Matrix and its Steady-State Vector

The transition matrix of an $n$-state Markov process is an $n \times n$ matrix $M$ where the $(i, j)$ entry of $M$ represents the probability that an object in state $j$ transitions into state $i$. That is, if $M = (m_{ij})$ and the states are $S_1, S_2, \ldots, S_n$, then $m_{ij}$ is the probability that an object in state $S_j$ transitions to state $S_i$.

Theorem 3. Let $M$ be the transition matrix of a Markov process such that $M^k$ has all positive entries for some $k$. Then there exists a unique probability vector $x_s$ such that $M x_s = x_s$. Moreover, $\lim_{k \to \infty} M^k x_0 = x_s$ for any initial state probability vector $x_0$. The vector $x_s$ is called the steady-state vector.

Notice that we have the chain of equivalences:

$$M x_s = x_s \iff M x_s - x_s = 0 \iff M x_s - I x_s = 0 \iff (M - I) x_s = 0 \iff x_s \in N(M - I).$$

Thus $x_s$ is a vector in the nullspace of $M - I$. Moreover, $\dim N(M - I) = 1$, and any vector in $N(M - I)$ is just a scalar multiple of $x_s$. In particular, if $x = (x_1, \ldots, x_n)$ is any non-zero vector in $N(M - I)$, then $x_s = \frac{1}{c}\, x$, where $c = x_1 + \cdots + x_n$.

Example: A certain protein molecule can have three configurations, which we denote as C1, C2 and C3. Every second the protein molecule can make a transition from one configuration to another configuration with the following probabilities:

C1 → C2, P = 0.2    C2 → C1, P = 0.3    C3 → C1, P = 0.4
C1 → C3, P = 0.5    C2 → C3, P = 0.2    C3 → C2, P = 0.2

Find the transition matrix $M$ and steady-state vector $x_s$ for this Markov process.

Recall that $M = (m_{ij})$, where $m_{ij}$ is the probability of configuration $C_j$ making the transition to $C_i$. The probability of remaining in a configuration is whatever is left over, so each column of $M$ sums to 1. Therefore

$$M = \begin{pmatrix} 0.3 & 0.3 & 0.4 \\ 0.2 & 0.5 & 0.2 \\ 0.5 & 0.2 & 0.4 \end{pmatrix} \qquad \text{and} \qquad M - I = \begin{pmatrix} -0.7 & 0.3 & 0.4 \\ 0.2 & -0.5 & 0.2 \\ 0.5 & 0.2 & -0.6 \end{pmatrix}.$$

What remains is to determine the steady-state vector. We compute a basis for $N(M - I)$ by putting $M - I$ into reduced echelon form:

$$U = \begin{pmatrix} 1 & 0 & -0.8966 \\ 0 & 1 & -0.7586 \\ 0 & 0 & 0 \end{pmatrix},$$

and we see that $x = (0.8966, 0.7586, 1)$ is the basis vector for $N(M - I)$. Consequently $c = 0.8966 + 0.7586 + 1 = 2.6552$, and

$$x_s = \frac{1}{c}\, x = (0.3377, 0.2857, 0.3766)$$

is the steady-state vector of this process.

Note also that the nullspace of $M - I$ can be found using MATLAB and the function null():

x = null(M - eye(3))
xs = x / sum(x)
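For readers without MATLAB, Theorem 3 itself suggests an alternative: since $M^k x_0 \to x_s$ for any initial probability vector, repeated multiplication (power iteration) approximates the steady-state vector. The Python sketch below, using only the standard library and the protein example's transition matrix, is a cross-check rather than the notes' nullspace method:

```python
# Power iteration: Theorem 3 guarantees M^k x0 -> xs for any initial
# probability vector x0, because M has all positive entries.
M = [[0.3, 0.3, 0.4],
     [0.2, 0.5, 0.2],
     [0.5, 0.2, 0.4]]        # protein example; column j = transitions out of Cj

def apply_matrix(M, x):
    """Return the matrix-vector product M x."""
    n = len(x)
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

xs = [1.0, 0.0, 0.0]         # any initial probability vector works
for _ in range(100):
    xs = apply_matrix(M, xs)

print([round(v, 4) for v in xs])   # → [0.3377, 0.2857, 0.3766]
```

The nullspace route gives the exact answer in one computation; power iteration trades that for simplicity, and it also works when only matrix-vector products are available.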
