# Using Mathematica for Analyzing Board Games to Teach the Basics of Markov Chains

April 5, 2008
Roger Bilisoly, Ph.D.
Department of Mathematical Sciences
Central Connecticut State University
New Britain, Connecticut

Conditional Probability
• Let A, B be two events
• P(A | B) = Probability of A given that B has
happened.
• P(A | B) = P(A and B)/P(B)
– This is the proportion of B’s that are also A’s

• Example: Draw 5 cards
– P(5th card an Ace | first 4 cards are Aces) = 0
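The card example above is immediate (no aces remain after four are drawn). A quick check of the definition P(A | B) = P(A and B)/P(B) by enumeration, using Python rather than the talk's Mathematica and a two-dice example of my own:

```python
from fractions import Fraction

# Enumerate all 36 ordered outcomes of two dice to illustrate
# P(A | B) = P(A and B)/P(B).  (This dice setting is my own
# illustration, not from the slides.)
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
A = {o for o in outcomes if o[0] + o[1] == 8}   # event A: sum is 8
B = {o for o in outcomes if o[0] == 3}          # event B: first die shows 3

p_B = Fraction(len(B), len(outcomes))
p_AB = Fraction(len(A & B), len(outcomes))      # only (3, 5) is in both
p_A_given_B = p_AB / p_B
print(p_A_given_B)  # 1/6
```

Note that P(A | B) is also the proportion of B's that are also A's: one outcome of the six in B.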

Roger Bilisoly 4-08

Independence

If P(A | B) = P(A), then knowing about B
makes no difference when computing the
probability of A.
Example: Let A = an event determined by a
die roll and B = an event determined by a
coin flip. Obviously we have:
– P(A | B) = P(A)
– P(B | A) = P(B)

This implies P(A and B) = P(A)*P(B).
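The product rule can be verified by enumerating the 12-point sample space of (die, coin) pairs; this is a Python illustration (the events chosen here are my own):

```python
from fractions import Fraction

# Sample space: (die roll, coin face).  A depends only on the die and
# B only on the coin, so the events are independent.
outcomes = [(d, c) for d in range(1, 7) for c in ("H", "T")]
A = {o for o in outcomes if o[0] % 2 == 0}   # die shows an even number
B = {o for o in outcomes if o[1] == "H"}     # coin shows heads

p = lambda E: Fraction(len(E), len(outcomes))
assert p(A & B) == p(A) * p(B)               # P(A and B) = P(A)*P(B)
assert p(A & B) / p(B) == p(A)               # P(A | B) = P(A)
```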

Random Variables
• A random variable is a random number
generator
• For example
– Roll 2 dice, let X = sum of values
– Flip 10 coins, let X = # of tails

• Since random variables describe events,
they can be used in conditional
probabilities
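The first example's distribution can be computed exactly by enumeration; a small Python sketch (the talk itself uses Mathematica):

```python
from fractions import Fraction
from collections import Counter

# Exact distribution of X = sum of two dice, by enumerating all 36 outcomes.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
dist = {x: Fraction(n, 36) for x, n in counts.items()}
print(dist[7])  # 1/6, the most likely sum
```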

Markov Chains
• This is a sequence of random variables
X1, X2, X3, … such that
– P(Xn | Xn-1, Xn-2, …, X1) = P(Xn | Xn-1)
– That is, given the immediately preceding
variable Xn-1, the rest of the variables X1, …,
Xn-2 do not add any further information. This
is an example of conditional independence.
– Note: This does not mean Xn is independent
of X1 or of X2 or of X3, …. In fact, Xn is usually
dependent on all of these.

Markov Chains - Continued
• X1, X2, X3, … could be a sequence of
events evolving over time, but this need
not be the case.
• Events evolving over time might be
modeled as a Markov chain, but the
random variables need not match the
times in a one-to-one fashion
– We will see an example of this with the game
Monopoly

Board Games: E.g., Monopoly

From http://www.worldofmonopoly.co.uk/history/images/bd-usa.jpg

Monopoly and Markov Chains
• Number the squares 1, 2, 3, …, 40, where
1=Go, 2=Mediterranean Avenue, etc.
• Let X1 = Position after first roll of dice, X2 =
Position after second roll, and so forth.
• Note that we have
– P(Xn | Xn-1, Xn-2, …, X1) = P(Xn | Xn-1)
– That is, the next position only requires
knowing the current position.

Transition Probability Matrix
• Key to understanding a Markov chain is
knowing the probabilities from any state to
any other state.
• For example, in Monopoly, what are the
probabilities from any square to any other
square?
– Moves are determined by dice, but there are
complications: doubles, Community Chest
and Chance cards, the Go to Jail square.

Simplest Board Game:
A Straight Line
• Start on the left. Goal is to get to the last square
on the right. Move by tossing a coin, H = move
1 to the right, T = move 2 to the right.
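The transition matrix for this game can be built in a few lines. A Python sketch (the talk uses Mathematica; this version adopts the convention, discussed two slides ahead, that square 9 moves to square 10 with probability 1 and square 10 is absorbing):

```python
from fractions import Fraction

# 10-square linear game: from squares 1-8 move 1 or 2 squares right,
# each with probability 1/2; square 9 goes to 10; square 10 absorbs.
n = 10
half = Fraction(1, 2)
P = [[Fraction(0)] * n for _ in range(n)]
for i in range(n - 2):          # squares 1..8 (0-indexed 0..7)
    P[i][i + 1] = half
    P[i][i + 2] = half
P[n - 2][n - 1] = Fraction(1)   # square 9 -> square 10
P[n - 1][n - 1] = Fraction(1)   # square 10 is absorbing

assert all(sum(row) == 1 for row in P)  # every row is a distribution
```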
[Board diagram: squares 1 through 10 in a row.]

Linear Game Example

P(1 to 2) = ½, P(1 to 3) = ½, P(1 to other) = 0
P(2 to 3) = ½, P(2 to 4) = ½, P(2 to other) = 0
P(3 to 4) = ½, P(3 to 5) = ½, P(3 to other) = 0

P(8 to 9) = ½, P(8 to 10) = ½, P(8 to other) = 0
[Board diagram: squares 1 through 10.]

Example, Continued
• P(9 to 10) = 1 is a possibility
• P(9 to 10) = ½, P(9 to 9) = ½ is another
possibility
– Both of these are used in actual games

Is there an even shorter way to summarize this
information? Yes, using matrices.

Matrix of Probabilities

P =
0   1/2 1/2 0   0   0   0   0   0   0
0   0   1/2 1/2 0   0   0   0   0   0
0   0   0   1/2 1/2 0   0   0   0   0
0   0   0   0   1/2 1/2 0   0   0   0
0   0   0   0   0   1/2 1/2 0   0   0
0   0   0   0   0   0   1/2 1/2 0   0
0   0   0   0   0   0   0   1/2 1/2 0
0   0   0   0   0   0   0   0   1/2 1/2
0   0   0   0   0   0   0   0   0   1
0   0   0   0   0   0   0   0   0   1

Rows represent the current
position, columns
represent the next position.
The last row indicates that
once a player gets to
the last square, he or
she stays there forever.

Transient vs. Recurrent States
• States are visited either a finite number of
times or infinitely often.
• Former are called transient, the latter
recurrent
• Example: Squares 1 through 9 are
transient, square 10 is recurrent.
[Board diagram: squares 1 through 10; square 10 is absorbing.]
Game with both
Transient and Recurrent States
• A linear game
attached to one or
more loops provides
an example of this.
– Example:

[Diagram: a line of squares beginning at Start and feeding into a loop.]

P = [transition matrix for this board, omitted: from each square the player moves one or two squares ahead, each with probability 1/2, and the squares on the loop wrap around so the player cycles there forever.]

Transient vs. Recurrent
• Knowing the mean number of times a
transient state occurs is interesting.
• Knowing the probability of reaching a
recurrent state is interesting.
• Both of these questions can be easily
answered using the transition matrix.

Transient States:
Mean Number of Visits
• Let P = matrix of transition probabilities.
• Let PT be the submatrix of P corresponding to
only the transient states.
P =
0   1/2 1/2 0   0   0   0   0   0   0
0   0   1/2 1/2 0   0   0   0   0   0
0   0   0   1/2 1/2 0   0   0   0   0
0   0   0   0   1/2 1/2 0   0   0   0
0   0   0   0   0   1/2 1/2 0   0   0
0   0   0   0   0   0   1/2 1/2 0   0
0   0   0   0   0   0   0   1/2 1/2 0
0   0   0   0   0   0   0   0   1/2 1/2
0   0   0   0   0   0   0   0   0   1
0   0   0   0   0   0   0   0   0   1

PT =
0   1/2 1/2 0   0   0   0   0   0
0   0   1/2 1/2 0   0   0   0   0
0   0   0   1/2 1/2 0   0   0   0
0   0   0   0   1/2 1/2 0   0   0
0   0   0   0   0   1/2 1/2 0   0
0   0   0   0   0   0   1/2 1/2 0
0   0   0   0   0   0   0   1/2 1/2
0   0   0   0   0   0   0   0   1/2
0   0   0   0   0   0   0   0   0

(Square 10 is the only recurrent state, so PT is the upper-left 9 by 9 block of P.)


Mean # of Visits
(Using Mathematica)
• S = (I – PT)^-1 gives the mean number of visits
starting at any state.
• On average the player moves 1.5 squares, so
the mean time on a square converges to
1/1.5 = 2/3.
S =
1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063 0.667969
0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063
0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875
0.       0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625
0.       0.       0.       0.       1.       0.5      0.75     0.625    0.6875
0.       0.       0.       0.       0.       1.       0.5      0.75     0.625
0.       0.       0.       0.       0.       0.       1.       0.5      0.75
0.       0.       0.       0.       0.       0.       0.       1.       0.5
0.       0.       0.       0.       0.       0.       0.       0.       1.
First row gives results starting at first square. For example, entry (1,1)
= 1 means 1 visit on average (player starts on first square and must
move forward one or two squares, so exactly one visit is guaranteed).
Entry (1,2) = 0.5 since 50% chance of going one square to the right.
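These mean-visit counts can be reproduced without a matrix inverse: since the transient part dies out, S = I + PT + PT² + … converges to (I – PT)^-1. A stdlib Python sketch standing in for the Mathematica `Inverse[]` call:

```python
# Mean number of visits for the 10-square linear game, via the Neumann
# series S = I + PT + PT^2 + ...  Here PT is strictly upper triangular,
# so the series is exact after a handful of terms.
n = 9  # transient squares 1..9

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

PT = [[0.0] * n for _ in range(n)]
for i in range(7):          # squares 1..7: move 1 or 2 squares right
    PT[i][i + 1] = 0.5
    PT[i][i + 2] = 0.5
PT[7][8] = 0.5              # square 8 -> 9; square 8 -> 10 leaves the set

S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
term = [row[:] for row in S]
for _ in range(20):         # PT is nilpotent, so 20 terms are plenty
    term = matmul(term, PT)
    S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]

print(round(S[0][2], 6))  # 0.75, the entry (1,3) discussed above
```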

Gambler’s Ruin
• A gambler repeatedly bets $1; play continues
until $0 or $max is reached.
• This is a board game where winning moves a
square to the right, losing a square to the left.
• Now there are two recurrent states: one for \$0
and one for \$max.
• Two questions:
– What is the probability of reaching \$0 and \$max?
– What is the mean number of visits to the other states?

Gambler’s Ruin for \$0 and \$10
(Fair payoffs)

PT =
0   1/2 0   0   0   0   0   0   0
1/2 0   1/2 0   0   0   0   0   0
0   1/2 0   1/2 0   0   0   0   0
0   0   1/2 0   1/2 0   0   0   0
0   0   0   1/2 0   1/2 0   0   0
0   0   0   0   1/2 0   1/2 0   0
0   0   0   0   0   1/2 0   1/2 0
0   0   0   0   0   0   1/2 0   1/2
0   0   0   0   0   0   0   1/2 0

S = (I – PT)^-1 =
1.8 1.6 1.4 1.2 1.  0.8 0.6 0.4 0.2
1.6 3.2 2.8 2.4 2.  1.6 1.2 0.8 0.4
1.4 2.8 4.2 3.6 3.  2.4 1.8 1.2 0.6
1.2 2.4 3.6 4.8 4.  3.2 2.4 1.6 0.8
1.  2.  3.  4.  5.  4.  3.  2.  1.
0.8 1.6 2.4 3.2 4.  4.8 3.6 2.4 1.2
0.6 1.2 1.8 2.4 3.  3.6 4.2 2.8 1.4
0.4 0.8 1.2 1.6 2.  2.4 2.8 3.2 1.6
0.2 0.4 0.6 0.8 1.  1.2 1.4 1.6 1.8

Here P(Win) = P(Lose) = 1/2.
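A pattern is visible in this S: the entry for states $i and $j is 2·min(i,j)·(10−max(i,j))/10, the standard Green's function of the fair walk absorbed at $0 and $10. A Python check via the Neumann series (my own verification, not from the slides):

```python
# Mean visits for fair gambler's ruin on $0..$10, via the Neumann
# series for S = (I - PT)^-1, checked against the closed form
# S[i][j] = 2*min(i,j)*(10-max(i,j))/10 for states $1..$9.
n = 9

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

PT = [[0.0] * n for _ in range(n)]
for i in range(n):
    if i > 0:
        PT[i][i - 1] = 0.5   # lose $1
    if i < n - 1:
        PT[i][i + 1] = 0.5   # win $1

S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
term = [row[:] for row in S]
for _ in range(2000):        # the walk mixes slowly, so sum many terms
    term = matmul(term, PT)
    S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]

for i in range(n):
    for j in range(n):
        expected = 2 * min(i + 1, j + 1) * (10 - max(i + 1, j + 1)) / 10
        assert abs(S[i][j] - expected) < 1e-4
```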


Gambler’s Ruin:
Recurrent State Probabilities
• Let R = matrix of transition probabilities from
transient to recurrent states.
• Let F = matrix of eventual probabilities of
reaching recurrent states from transient states.
• In general
– F = SR = (I – PT)^-1 R
– Example at right

R =
1/2 0
0   0
0   0
0   0
0   0
0   0
0   0
0   0
0   1/2

F =
9/10 1/10
4/5  1/5
7/10 3/10
3/5  2/5
1/2  1/2
2/5  3/5
3/10 7/10
1/5  4/5
1/10 9/10
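The classic result F(i, $10) = i/10 for the fair game can be checked by iterating F ← PT·F + R until it converges to (I − PT)^-1 R; a Python sketch (my own verification):

```python
# Absorption probabilities for fair gambler's ruin: F = (I - PT)^-1 R,
# approximated by the fixed-point iteration F <- PT F + R.
n = 9
PT = [[0.0] * n for _ in range(n)]
for i in range(n):
    if i > 0:
        PT[i][i - 1] = 0.5
    if i < n - 1:
        PT[i][i + 1] = 0.5
R = [[0.0, 0.0] for _ in range(n)]
R[0][0] = 0.5       # from $1, one losing step reaches $0
R[n - 1][1] = 0.5   # from $9, one winning step reaches $10

F = [[0.0, 0.0] for _ in range(n)]
for _ in range(5000):
    F = [[sum(PT[i][k] * F[k][c] for k in range(n)) + R[i][c]
          for c in range(2)] for i in range(n)]

# Starting with $i: P(reach $10) = i/10 and P(reach $0) = 1 - i/10.
for i in range(n):
    assert abs(F[i][1] - (i + 1) / 10) < 1e-4
    assert abs(F[i][0] + F[i][1] - 1) < 1e-4
```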

Gambler’s Ruin for U.S. Roulette
P matrix: states $0 through $10, with $0 and $10 absorbing (their rows have a 1 on the diagonal). From each other state the player moves down $1 with probability 10/19 and up $1 with probability 9/19.

F matrix (rows are the starting amounts $1 through $9):

Reach $0   Reach $10
0.940518   0.0594822
0.874426   0.125574
0.800992   0.199008
0.719397   0.280603
0.628737   0.371263
0.528003   0.471997
0.416077   0.583923
0.291715   0.708285
0.153534   0.846466

Note the bias for reaching $0.
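The same fixed-point computation with the roulette win probability 9/19 reproduces these numbers; a Python sketch (the talk computes this in Mathematica):

```python
# Absorption probabilities with the U.S. roulette red/black bet:
# P(lose $1) = 10/19, P(win $1) = 9/19, playing between $0 and $10.
n, q, p = 9, 10 / 19, 9 / 19
PT = [[0.0] * n for _ in range(n)]
for i in range(n):
    if i > 0:
        PT[i][i - 1] = q
    if i < n - 1:
        PT[i][i + 1] = p
R = [[0.0, 0.0] for _ in range(n)]
R[0][0] = q      # from $1, one losing spin reaches $0
R[n - 1][1] = p  # from $9, one winning spin reaches $10

F = [[0.0, 0.0] for _ in range(n)]
for _ in range(5000):   # iterate F <- PT F + R to convergence
    F = [[sum(PT[i][k] * F[k][c] for k in range(n)) + R[i][c]
          for c in range(2)] for i in range(n)]

print(round(F[0][0], 6))  # 0.940518: heavy bias toward ruin from $1
```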

Simplified Version of Chutes and Ladders
Game is 20 squares long.
Must land on the last square to win.
Square 7 goes to 14, Square 17
goes to 3. Moves use a single die, so
each advance of 1 through 6 squares has
probability 1/6. Expected number of
times per square given below.

P = [20 by 20 transition matrix omitted; entries are 0, 1/6, and the deterministic redirections for squares 7 and 17, with rolls that would overshoot square 20 leaving the player in place.]

[Bar chart: expected number of visits for squares 1 through 20, ranging from 0 to about 3.]
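A Python sketch of this chain, under the reading that the chute and ladder fire immediately (so squares 7 and 17 are never occupied) and that overshooting rolls leave the player in place; both are my reading of the slide, not stated rules:

```python
# Simplified Chutes and Ladders: expected visits from the Neumann
# series for S = (I - PT)^-1 over the transient squares 1..19.
n = 19  # square 20 is the absorbing goal

def step(frm):
    """Distribution over transient squares after one roll from frm."""
    row = [0.0] * n
    for roll in range(1, 7):
        dest = frm + roll
        if dest > 20:
            dest = frm                 # overshoot: stay put (assumption)
        if dest == 7:
            dest = 14                  # ladder
        elif dest == 17:
            dest = 3                   # chute
        if dest <= 19:
            row[dest - 1] += 1 / 6     # square 20 is recurrent, dropped
    return row

PT = [step(i) for i in range(1, 20)]

S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
term = [row[:] for row in S]
for _ in range(500):
    term = [[sum(term[i][k] * PT[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
    S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]

assert S[0][6] == 0          # square 7 is never occupied (ladder fires)
assert S[0][16] == 0         # square 17 is never occupied (chute fires)
assert abs(S[0][1] - 1 / 6) < 1e-6   # square 2 only via a first roll of 1
```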

Full Version of Chutes and Ladders
• This makes a good project, or a good
example to discuss in class.
• “Using Games to Teach Markov Chains”
by Roger Johnson.
– See: http://findarticles.com/p/articles/mi_qa3997/is_200312/ai_n9338086


Looping Board Games
• In games that loop, the states are usually
recurrent.
• Hence, long-term probabilities are of interest.

These probabilities form the
eigenvector of the transposed transition
probability matrix P with eigenvalue 1
(normalized to sum to one). This is easily
computed with Mathematica.

[Diagram: a loop of squares.]

Example 1:
12 Square Loop
• Use one die for movement and the 12
square loop of the last slide. We have:

The limiting probabilities all equal 1/12.
This P has columns that also add
to one, which makes it a doubly
stochastic matrix. Such a matrix
of size n by n has all limiting probabilities
equal to 1/n.
Using the same randomization
device for moving from each
square results in a doubly
stochastic matrix.

P = [12 by 12 circulant matrix: from each square the player advances 1 through 6 squares around the loop, each with probability 1/6.]
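A quick Python check that this doubly stochastic loop has the uniform limiting distribution (power iteration stands in for the Mathematica eigenvector computation):

```python
# 12-square loop with one-die moves: the transition matrix is circulant,
# hence doubly stochastic, and the limiting distribution is uniform.
n = 12
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for roll in range(1, 7):
        P[i][(i + roll) % n] += 1 / 6

pi = [1.0] + [0.0] * (n - 1)          # start with all mass on square 1
for _ in range(500):                   # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

assert all(abs(x - 1 / n) < 1e-9 for x in pi)          # uniform limit
assert all(abs(sum(P[i][j] for i in range(n)) - 1) < 1e-12
           for j in range(n))                           # columns sum to 1
```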

Example 2:
12 Square Loop

P = [12 by 12 circulant matrix: every row is the same move distribution shifted cyclically; for example, P(Advance 5) = 0.3 for every square.]

Since every square uses the same move distribution, P is again doubly stochastic,
hence the limiting probabilities are all 1/12.

Periodicity
• Let P_ii = P(state i to state i)
• Let P^n_ii = P(state i to state i in n moves)
• If P^n_ii = 0 whenever d does not divide n (and d
is the largest integer with this property), then
state i has period d.
• Example: For the 12 square loop, let
P(Advance 1) = P(Move back 1) = 1/2,
which is a random walk on the loop. This
walk has period 2 (see next slide).

Example of Periodicity:
Random Walk on Loop
• A player can only
return to any square
after an even number
of moves.
• Limiting probabilities
(starting from an odd
square) are 1/6 for the
even squares after an odd
number of moves, and
1/6 for the odd squares
after an even number
of moves.

P =
0   1/2 0   0   0   0   0   0   0   0   0   1/2
1/2 0   1/2 0   0   0   0   0   0   0   0   0
0   1/2 0   1/2 0   0   0   0   0   0   0   0
0   0   1/2 0   1/2 0   0   0   0   0   0   0
0   0   0   1/2 0   1/2 0   0   0   0   0   0
0   0   0   0   1/2 0   1/2 0   0   0   0   0
0   0   0   0   0   1/2 0   1/2 0   0   0   0
0   0   0   0   0   0   1/2 0   1/2 0   0   0
0   0   0   0   0   0   0   1/2 0   1/2 0   0
0   0   0   0   0   0   0   0   1/2 0   1/2 0
0   0   0   0   0   0   0   0   0   1/2 0   1/2
1/2 0   0   0   0   0   0   0   0   0   1/2 0
Different Limiting Probabilities Require
Heterogeneous Moves!
• If movement depended solely on the
roll of two dice, then the limiting
probabilities would all be 1/40 (since there are
40 squares on a Monopoly board).
• In Monopoly, however, there is a “Go
to Jail” square, and some Chance and
Community Chest cards direct the player
to go to certain squares. These changes
alter the limiting probabilities.

Board with “Go to Jail” Square
(Moves based on sum of 2 dice)
Now the limiting probabilities
are unequal. Note that the
“Go to Jail” square has probability 0,
since a move never ends
on that square.

Besides Jail, the most likely
square is 7 squares past Jail
(with probability = 0.0280).

[Board diagram: a 40-square loop with Start, Jail, and the “Go to Jail” square; movement is clockwise.]
{0.0229,0.0231,0.0233,0.0236,0.0232,0.023,0.0229,0.0229,0.023,0.0231,
0.05,0.0231,0.0239,0.0246,0.0253,0.0261,0.027,0.028,0.0276,0.0273,
0.0271,0.0269,0.0267,0.0264,0.0268,0.027,0.0271,0.0271,0.027,0.0269,
0,0.0269,0.0261,0.0254,0.0247,0.0239,0.023,0.022,0.0224,0.0227}
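These limiting probabilities can be reproduced with power iteration; a Python sketch mirroring the Mathematica computation shown later in the talk (0-based indices, with Jail at index 10 and “Go to Jail” at index 30 — my own labeling):

```python
# Stationary distribution for a 40-square loop with two-dice moves and
# a 'Go to Jail' square: landing on index 30 sends the player to 10.
n = 40
two_dice = {s: min(s - 1, 13 - s) / 36 for s in range(2, 13)}  # sums 2..12
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for s, prob in two_dice.items():
        dest = (i + s) % n
        if dest == 30:        # Go to Jail
            dest = 10         # Jail
        P[i][dest] += prob

pi = [1.0 / n] * n
for _ in range(400):          # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

assert pi[30] < 1e-12                 # 'Go to Jail' is never occupied
assert abs(pi[10] - 0.05) < 0.001     # Jail: about twice the usual mass
assert abs(pi[17] - 0.028) < 0.001    # most likely ordinary square: Jail + 7
```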

Approximate Monopoly
• Ignore that three doubles in a row means going to jail.
• There are 10 (of 16) Chance cards that relocate the
player (e.g., advance to a RR, Go to Boardwalk, Go back 3 spaces).
• There are 2 (of 16) Community Chest cards that relocate
the player: Advance to Go, Go to Jail.
• Assume that each card has an equal chance of appearing
(equivalent to shuffling the cards each time a player
lands on Chance or Community Chest).
• Assume players immediately pay to get out of jail.
– This is the best strategy for the beginning of the game.

P = [40 by 40 transition matrix for Approximate Monopoly, omitted here. Its entries are exact fractions (with denominators as large as 9216) obtained by combining the two-dice distribution with the Chance and Community Chest card moves and the “Go to Jail” square.]
Limiting Probabilities of
Approximate Monopoly
Limiting probabilities are given below
and plotted to the right (rescaled).

The results below are close to
values found empirically, e.g.,
Truman Collins’ results based on
32 billion rolls of the dice
(differences are less than 1%).

[Board plot with Start, Jail, and Illinois Avenue marked.]

{0.03114,0.02152,0.019,0.02186,0.02351,0.02993,0.02285,0.00876,0.02347,0.02331,
0.05896,0.02736,0.02627,0.02386,0.02467,0.02919,0.02777,0.02572,0.02917,0.03071,
0.02875,0.0283,0.01048,0.02739,0.03188,0.03064,0.02707,0.02679,0.02811,0.02591,
0,0.02687,0.02634,0.02377,0.0251,0.02446,0.00872,0.02202,0.02193,0.02647}
http://www.tkcs-collins.com/truman/monopoly/monopoly.shtml

Three Doubles in a Row
Puts You in Jail
• This can be handled by using three states for each square. For
example, let 1, 2 and 3 stand for Go.
– 1 means no prior doubles,
– 2 means one prior double and
– 3 means 2 prior doubles.

• If a player is in state 3 and rolls a double, then he or she goes to jail.
States 1 and 2 have the same probabilities as computed earlier.
• This has been done, e.g., “Monopoly as a Markov Process” by
Robert Ash and Richard Bishop, Mathematics Magazine, 45(1),
Jan., 1972, 26-29.
• Alternative method: The probability of 3 doubles in a row is 1/216,
which can be used to approximate this happening. This is done in
“Take a Walk on the Boardwalk” by Stephen Abbott and Matt
Richey, The College Mathematics Journal, 28(3), May, 1997, 162-171.


Mathematica: Create P
and Compute S
tranMatrixLinear1[n_, probs_] := Module[{p = {}, r, i},
  (* Row r: zeros before square r, then the move probabilities, then zeros *)
  Do[AppendTo[p, {Table[0, {i, 1, r - 1}],
      probs[[1 ;; Min[n - r + 1, Length[probs]]]],
      {Table[0, {i, r + Length[probs], n}]}} // Flatten],
    {r, 1, n}];
  (* Make the last square absorbing so each row sums to 1 *)
  Do[p[[r, n]] = 1 - Fold[Plus, 0, p[[r, 1 ;; n - 1]]], {r, 1, n}];
  Return[p]
]
n = 10;
p = tranMatrixLinear1[n, {0, 1, 1}/2];
pt = p[[1 ;; n - 1, 1 ;; n - 1]];
s = Inverse[IdentityMatrix[n - 1] - pt];
s // MatrixForm // N


Output (S):
1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063 0.667969
0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063
0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875
0.       0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625
0.       0.       0.       0.       1.       0.5      0.75     0.625    0.6875
0.       0.       0.       0.       0.       1.       0.5      0.75     0.625
0.       0.       0.       0.       0.       0.       1.       0.5      0.75
0.       0.       0.       0.       0.       0.       0.       1.       0.5
0.       0.       0.       0.       0.       0.       0.       0.       1.

Mathematica: Create and
Compute Limiting Probabilities
tranMatrixLoop1[n_, probs_] := Module[{p = {}, row, i},
  (* First row: move probabilities padded with zeros; each later row is a cyclic shift *)
  row = {probs, {Table[0, {i, Length[probs] + 1, n}]}} // Flatten;
  Do[row = RotateRight[row];
    AppendTo[p, row], {i, 1, n}];
  Return[p]
]
n = 40; limit = n/4 + 1;
p = tranMatrixLoop1[n, {0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1}/36];
p[[All, 11]] += p[[All, 31]]; (* landing on "Go to Jail" sends the player to Jail *)
p[[All, 31]] = Table[0, {40}];
out = Eigensystem[p // Transpose // N];
pi = out[[2, 1]]/Fold[Plus, 0, out[[2, 1]]]; (* normalize the eigenvector for eigenvalue 1 *)
Round[pi, 0.0001]
{0.0229,0.0231,0.0233,0.0236,0.0232,0.023,0.0229,0.0229,0.023,0.0231,
0.05,0.0231,0.0239,0.0246,0.0253,0.0261,0.027,0.028,0.0276,0.0273,
0.0271,0.0269,0.0267,0.0264,0.0268,0.027,0.0271,0.0271,0.027,0.0269,
0,0.0269,0.0261,0.0254,0.0247,0.0239,0.023,0.022,0.0224,0.0227}

Mathematica: Graphics
piMatrix = Table[Table[1,{limit}],{limit}];
piMatrix[[1]] = pi[[1;;limit]];
Do[piMatrix[[i,1]] = pi[[n-i+2]];
piMatrix[[i,limit]] = pi[[limit+i-1]],
{i,2,limit-1}]
piMatrix[[limit]] = pi[[n/2+1;;3 n/4+1]]//Reverse;
g1 = Graphics[Raster[piMatrix //Transpose] ];
g2 = ListLinePlot[gridLoop[n],
PlotStyle->Directive[Black,Thick],AspectRatio->1/n];
Show[g1,g2]

Raster[] takes a matrix and
makes a 2D grid. However, the
resulting image is flipped about the
x-axis. This is reflected in the
construction of piMatrix.


Abbott, Steve and Matt Richey. 1997. Take a Walk on the Boardwalk. The College Mathematics
Journal. 28(3): 162-171.
Althoen, S. C., L. King, and K. Schilling. 1993. How Long is a Game of Snakes and Ladders?
Mathematical Gazette. 77: 71-76.
Ash, Robert and Richard Bishop. 1972. Monopoly as a Markov Process. Mathematics Magazine.
45: 26-29.
Bewersdorff, Jörg. 2005. Luck, Logic, and White Lies: The Mathematics of Games. A. K. Peters,
Ltd. Chapter 16 analyzes Monopoly with Markov chains.
Diaconis, Persi and Rick Durrett. 2000. Chutes and Ladders in Markov Chains. Technical Report
2000-20, Department of Statistics, Stanford University.
Dirks, Robert. 1999. Hi Ho! Cherry-O, Markov Chains, and Mathematica. Stats. Spring (25): 23-27.
Johnson, Roger W. 2003. Using Games to Teach Markov Chains. Primus.
http://findarticles.com/p/articles/mi_qa3997/is_200312/ai_n9338086/print
Gadbois, Steve. 1993. Mr. Markov Plays Chutes and Ladders. The UMAP Journal. 14(1): 31-38.
Murrell, Paul. 1999. The Statistics of Monopoly. Chance. 12(4): 36-40.
Stewart, Ian. 1996. How Fair is Monopoly? Scientific American. 274(4): 104-105.
Stewart, Ian. 1996. Monopoly Revisited. Scientific American. 275(4): 116-119.
Tan, Baris. 1997. Markov Chains and the Risk Board Game. Mathematics Magazine. 70(5):
349-357.