
Introduction to Game Theory                                          Date: 15th Feb 2019

Instructor: Sujit Prakash Gujar          Scribes: Harish Kumar Datla, Rishabh Chandra

Lecture 11: Precursor to Lemke-Howson Algorithm


1 Recap
• Two-Player Non-Zero-Sum Games

Non-zero-sum games are situations in which the aggregate of the utilities of the interacting parties
can be less than or more than zero. A two-player non-zero-sum game is also called a bimatrix game,
and it can be either competitive or non-competitive. Finding a Nash equilibrium of a two-player
non-zero-sum game can be formulated as a linear complementarity problem (LCP).
• Complexity

The Lemke-Howson algorithm solves bimatrix games via the LCP formulation, and its worst-case
time complexity is exponential. Daskalakis, Goldberg, and Papadimitriou showed that computing a
Nash equilibrium is PPAD-complete [1].

• Better Patrolling with Game Theory

For scheduling randomized patrols for fare inspection in transit systems, we framed the problem as
an optimization problem in which the fares collected are maximized under various constraints.

2 Motivation for the Lemke-Howson Algorithm


We know that two-player zero-sum games admit polynomial-time algorithms, while computing a
Nash equilibrium of a two-player non-zero-sum game is PPAD-complete. An intuition for this gap is
that the number of possible supports in a two-player non-zero-sum game is exponential.

Given a bimatrix game (A, B) with m × n payoff matrices A and B, a mixed strategy for player
1 is a vector x ∈ R^m with non-negative components that sum to 1. Similarly, a mixed strategy for
player 2 is a vector y ∈ R^n with non-negative components that sum to 1.
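For concreteness, such a game can be stored as two equal-sized matrices, and a candidate mixed
strategy checked against this definition. The snippet below is a minimal sketch, not part of the
original notes; the matrices are those of Example 1.1 in Section 5, and exact fractions are used so
that the sum-to-one test is exact.

```python
from fractions import Fraction

# An m x n bimatrix game is given by two payoff matrices of the same shape,
# stored here as lists of lists (the entries are from Example 1.1 below).
A = [[2, 0], [0, 6]]   # payoffs of player 1 (row player)
B = [[0, 6], [2, 0]]   # payoffs of player 2 (column player)

def is_mixed_strategy(x):
    """Non-negative components that sum to 1."""
    return all(p >= 0 for p in x) and sum(x) == 1

print(is_mixed_strategy([Fraction(1, 4), Fraction(3, 4)]))   # True
print(is_mixed_strategy([Fraction(1, 2), Fraction(1, 4)]))   # False (sums to 3/4)
```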

3 Analysis of a Two-Player Non-Zero-Sum Game


The support of a mixed strategy is the set of pure strategies that receive positive probability. A best
response to y is a mixed strategy x that maximizes the expected payoff x^T Ay; similarly, a best
response to x is a mixed strategy y that maximizes x^T By. A Nash equilibrium is a pair of mutual
best responses [2].
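These definitions can be checked directly: because the expected payoff is linear in each player's
strategy, a mixed strategy is a best response iff no pure strategy does strictly better. The sketch
below assumes the list-of-lists representation from the previous snippet; the helper names are
illustrative, not taken from the notes.

```python
from fractions import Fraction

def expected_payoff(M, x, y):
    """x^T M y: expected payoff under mixed strategies x (rows) and y (columns)."""
    return sum(x[i] * M[i][j] * y[j]
               for i in range(len(M)) for j in range(len(M[0])))

def is_best_response_row(A, x, y):
    """x is a best response to y iff no pure row strategy earns strictly more against y."""
    m = len(A)
    value = expected_payoff(A, x, y)
    pure = lambda i: [1 if k == i else 0 for k in range(m)]
    return all(expected_payoff(A, pure(i), y) <= value for i in range(m))

def is_best_response_col(B, x, y):
    """y is a best response to x iff no pure column strategy earns strictly more against x."""
    n = len(B[0])
    value = expected_payoff(B, x, y)
    pure = lambda j: [1 if k == j else 0 for k in range(n)]
    return all(expected_payoff(B, x, pure(j)) <= value for j in range(n))

def is_nash_equilibrium(A, B, x, y):
    """A Nash equilibrium is a pair of mutual best responses."""
    return is_best_response_row(A, x, y) and is_best_response_col(B, x, y)

# The equilibrium of Example 1.1 passes the test.
A, B = [[2, 0], [0, 6]], [[0, 6], [2, 0]]
x, y = [Fraction(1, 4), Fraction(3, 4)], [Fraction(3, 4), Fraction(1, 4)]
print(is_nash_equilibrium(A, B, x, y))   # True
```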
• Some Assumptions

Positive matrices: Both A and B are positive matrices. Even if they are not, we can add a
suitable constant to each matrix to make it positive and then compute the equilibrium; this
does not change the Nash equilibrium profile (see the sketch below).
Nondegeneracy assumption: A bimatrix game is nondegenerate if the number of pure best
responses to any mixed strategy never exceeds the size of its support.
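A minimal sketch of the positivity assumption (the function name and shift constant are
illustrative): adding the same constant to every entry of one player's payoff matrix shifts all of
that player's expected payoffs equally, so best responses, and hence the equilibrium profile, are
unchanged.

```python
def make_positive(M):
    """Add the same constant to every entry so that the matrix becomes strictly positive.
    A uniform shift changes all expected payoffs of that player by the same amount,
    so best responses -- and the Nash equilibrium profile -- are unchanged."""
    shift = 1 - min(min(row) for row in M)
    if shift <= 0:
        return [row[:] for row in M]             # already strictly positive
    return [[entry + shift for entry in row] for row in M]

print(make_positive([[2, 0], [0, 6]]))           # [[3, 1], [1, 7]]
print(make_positive([[-1, 4], [2, 3]]))          # [[1, 6], [4, 5]]
```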

We know that a Nash equilibrium must exist for two-player non-zero-sum games. Let w_1 be the
expected payoff of player A and w_2 the expected payoff of player B at a Nash equilibrium, and write
player A's mixed strategy as (x_1, ..., x_m) and player B's mixed strategy as (x_{m+1}, ..., x_{m+n}).
The linear inequality constraints involving the expected payoffs can be written as follows [2].

\[
\begin{pmatrix} x_1 & x_2 & \cdots & x_m \end{pmatrix}
\begin{pmatrix}
b_{11} & b_{12} & \cdots & b_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
b_{m1} & b_{m2} & \cdots & b_{mn}
\end{pmatrix}
\le
\begin{pmatrix} w_2 & w_2 & \cdots & w_2 \end{pmatrix}
\]

Similarly, for player A we can write linear constraints as below

\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\begin{pmatrix} x_{m+1} \\ x_{m+2} \\ \vdots \\ x_{m+n} \end{pmatrix}
\le
\begin{pmatrix} w_1 \\ w_1 \\ \vdots \\ w_1 \end{pmatrix}
\]

Since probabilities are always non-negative, for player B we have
\[
\begin{pmatrix} x_{m+1} \\ x_{m+2} \\ \vdots \\ x_{m+n} \end{pmatrix}
\ge
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\]
Similarly, for player A we have
\[
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}
\ge
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\]
We also know that the sum of probabilities is 1. Therefore the following equations hold
\[
\sum_{i=1}^{m} x_i = 1, \qquad \sum_{i=m+1}^{m+n} x_i = 1
\]

We can expand player A's inequality constraints componentwise as follows.

\[
\begin{aligned}
a_{11} x_{m+1} + a_{12} x_{m+2} + \cdots + a_{1n} x_{m+n} &\le w_1 \\
a_{21} x_{m+1} + a_{22} x_{m+2} + \cdots + a_{2n} x_{m+n} &\le w_1 \\
\vdots & \\
a_{m1} x_{m+1} + a_{m2} x_{m+2} + \cdots + a_{mn} x_{m+n} &\le w_1
\end{aligned}
\]

Since A is a positive matrix, w_1 > 0, so we can eliminate the payoff variable w_1 by dividing both
sides of each inequality by w_1. Hence we get

\[
\begin{aligned}
a_{11} x_{m+1}/w_1 + a_{12} x_{m+2}/w_1 + \cdots + a_{1n} x_{m+n}/w_1 &\le 1 \\
a_{21} x_{m+1}/w_1 + a_{22} x_{m+2}/w_1 + \cdots + a_{2n} x_{m+n}/w_1 &\le 1 \\
\vdots & \\
a_{m1} x_{m+1}/w_1 + a_{m2} x_{m+2}/w_1 + \cdots + a_{mn} x_{m+n}/w_1 &\le 1
\end{aligned}
\]
For the sake of simplicity we treat each ratio x_i/w_1 as a new variable and again denote it by x_i;
the scaled variables are converted back to probabilities at the end by dividing each one by their sum.
So we have the following inequalities.

\[
\begin{aligned}
a_{11} x_{m+1} + a_{12} x_{m+2} + \cdots + a_{1n} x_{m+n} &\le 1 \\
a_{21} x_{m+1} + a_{22} x_{m+2} + \cdots + a_{2n} x_{m+n} &\le 1 \\
\vdots & \\
a_{m1} x_{m+1} + a_{m2} x_{m+2} + \cdots + a_{mn} x_{m+n} &\le 1
\end{aligned}
\]

Similarly, for player B (assuming w_2 > 0, which holds since B is positive, and scaling x_1, ..., x_m
by w_2 in the same way), we have the following inequalities.

\[
\begin{aligned}
b_{11} x_1 + b_{21} x_2 + \cdots + b_{m1} x_m &\le 1 \\
b_{12} x_1 + b_{22} x_2 + \cdots + b_{m2} x_m &\le 1 \\
\vdots & \\
b_{1n} x_1 + b_{2n} x_2 + \cdots + b_{mn} x_m &\le 1
\end{aligned}
\]

With these substitutions the expected payoffs w_1 and w_2 are normalized to 1, so the constraints
\[
\sum_{i=1}^{m} x_i = 1, \qquad \sum_{i=m+1}^{m+n} x_i = 1
\]
can be dropped; they are recovered at the end by renormalizing the scaled variables.
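Compactly, the scaled system describes two polytopes: P = {x ∈ R^m : x ≥ 0, Bᵀx ≤ 1} for player A
and Q = {y ∈ R^n : y ≥ 0, Ay ≤ 1} for player B, where y stands for (x_{m+1}, ..., x_{m+n}). The
following sketch (using the matrices of Example 1.1 from Section 5; the function names are
illustrative) checks membership in these polytopes with exact rational arithmetic.

```python
from fractions import Fraction

# Example 1.1 matrices from Section 5 (A for player A, B for player B).
A = [[2, 0], [0, 6]]
B = [[0, 6], [2, 0]]

def in_polytope_P(B, x):
    """x >= 0 and, column by column, b_1j*x_1 + ... + b_mj*x_m <= 1."""
    m, n = len(B), len(B[0])
    return (all(xi >= 0 for xi in x) and
            all(sum(B[i][j] * x[i] for i in range(m)) <= 1 for j in range(n)))

def in_polytope_Q(A, y):
    """y >= 0 and, row by row, a_i1*y_1 + ... + a_in*y_n <= 1."""
    m, n = len(A), len(A[0])
    return (all(yj >= 0 for yj in y) and
            all(sum(A[i][j] * y[j] for j in range(n)) <= 1 for i in range(m)))

print(in_polytope_P(B, [Fraction(1, 6), Fraction(1, 2)]))   # True  (scaled equilibrium point)
print(in_polytope_Q(A, [Fraction(1, 2), Fraction(1, 6)]))   # True  (scaled equilibrium point)
print(in_polytope_Q(A, [Fraction(3, 4), Fraction(1, 4)]))   # False (first row gives 3/2 > 1)
```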

4 The Concept of Tight Constraint and Evaluation of Nash Equilibrium
We label the constraints in pairs as shown below. In each pair, at least one constraint must hold
with equality at an equilibrium: if x_i > 0, then the corresponding pure strategy is in the support,
so it must earn the maximum (equilibrium) payoff, which forces the paired payoff inequality to hold
with equality; if x_i = 0, then the non-negativity constraint itself is tight.

An inequality constraint that holds with equality is said to be tight. At an equilibrium, at least
one constraint in each of the m + n pairs must be tight. The tight constraints intersect at a vertex
of the corresponding polygon.

\[
\begin{array}{rcl}
\text{A} & \text{label} & \text{B} \\
x_1 \ge 0 & 1 & a_{11} x_{m+1} + a_{12} x_{m+2} + \cdots + a_{1n} x_{m+n} \le 1 \\
x_2 \ge 0 & 2 & a_{21} x_{m+1} + a_{22} x_{m+2} + \cdots + a_{2n} x_{m+n} \le 1 \\
\vdots & \vdots & \vdots \\
x_m \ge 0 & m & a_{m1} x_{m+1} + a_{m2} x_{m+2} + \cdots + a_{mn} x_{m+n} \le 1 \\
b_{11} x_1 + b_{21} x_2 + \cdots + b_{m1} x_m \le 1 & m+1 & x_{m+1} \ge 0 \\
b_{12} x_1 + b_{22} x_2 + \cdots + b_{m2} x_m \le 1 & m+2 & x_{m+2} \ge 0 \\
\vdots & \vdots & \vdots \\
b_{1n} x_1 + b_{2n} x_2 + \cdots + b_{mn} x_m \le 1 & m+n & x_{m+n} \ge 0
\end{array}
\]

Except for the origin pair, every other vertex pair might represent an equilibrium; at the origin the
coordinates cannot be normalized to obtain probabilities. Out of the many possible vertex pairs, we
identify the equilibrium with the help of the following theorem.
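The label set of a point is just the set of indices of the constraints that are tight there. A minimal
sketch of this computation, assuming the list-of-lists matrices used earlier and exact rational
coordinates so that equality tests are meaningful (the function names are illustrative):

```python
from fractions import Fraction

A = [[2, 0], [0, 6]]   # Example 1.1 matrices from Section 5
B = [[0, 6], [2, 0]]

def labels_P(B, x):
    """Labels of a point x in player A's polygon:
    label i      (1 <= i <= m)  if x_i = 0,
    label m + j  (1 <= j <= n)  if b_1j*x_1 + ... + b_mj*x_m = 1."""
    m, n = len(B), len(B[0])
    tight = {i + 1 for i in range(m) if x[i] == 0}
    tight |= {m + j + 1 for j in range(n)
              if sum(B[i][j] * x[i] for i in range(m)) == 1}
    return tight

def labels_Q(A, y):
    """Labels of a point y in player B's polygon:
    label i      (1 <= i <= m)  if a_i1*y_1 + ... + a_in*y_n = 1,
    label m + j  (1 <= j <= n)  if y_j = 0."""
    m, n = len(A), len(A[0])
    tight = {i + 1 for i in range(m)
             if sum(A[i][j] * y[j] for j in range(n)) == 1}
    tight |= {m + j + 1 for j in range(n) if y[j] == 0}
    return tight

print(labels_P(B, [Fraction(1, 6), Fraction(1, 2)]))   # {3, 4}
print(labels_Q(A, [Fraction(1, 2), Fraction(1, 6)]))   # {1, 2}
```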

Theorem 1.1: For a nondegenerate positive bimatrix game, a pair (v, w), not both zero vectors,
represents a Nash equilibrium iff L(v, w) = {1, 2, ..., m + n}.

L(v, w) is defined as follows: for the row player's polygon P and the column player's polygon Q,
L(v, w) = L(v) ∪ L(w), where v ∈ P, w ∈ Q, and L(v), L(w) are the sets of labels of the constraints
that are tight at v and at w respectively [4].
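Assuming the labels_P and labels_Q helpers sketched above, the condition of Theorem 1.1 reduces to
a single set comparison:

```python
def is_completely_labelled(A, B, x, y):
    """Theorem 1.1: a vertex pair (x, y) other than the pair of zero vectors
    represents a Nash equilibrium (after normalization) iff its labels
    cover {1, ..., m + n}."""
    m, n = len(A), len(A[0])
    if all(v == 0 for v in x) and all(v == 0 for v in y):
        return False                       # the origin pair is excluded
    return labels_P(B, x) | labels_Q(A, y) == set(range(1, m + n + 1))
```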

5 Example 1.1
We will now see an example to demonstrate the above analysis. Consider Table 1 below, which gives
the payoff matrix of a two-player non-zero-sum game. We assume that player A plays his first strategy
with probability p (and the second with probability 1 − p), while player B plays her first strategy
with probability q (and the second with probability 1 − q).

                         Player B
                       q           1 − q
 Player A      p     (2, 0)       (0, 6)
             1 − p   (0, 2)       (6, 0)

Table 1: Payoff Matrix

We compute the mixed Nash equilibrium from the indifference conditions: player B must be
indifferent between her two columns, and player A must be indifferent between his two rows. This
gives the following equations.

\[
p \times 6 = (1 - p) \times 2, \qquad q \times 2 = (1 - q) \times 6
\]

By solving the above we get

p = 1/4 and q = 3/4

So the mixed strategy Nash equilibrium is
\[
\sigma = ((1/4, 3/4), (3/4, 1/4)).
\]
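As a quick sanity check, the indifference equations above can be verified exactly with rational
arithmetic (a small sketch, not part of the original notes):

```python
from fractions import Fraction

# Player B's indifference between her columns:  6p = 2(1 - p)  =>  p = 1/4
# Player A's indifference between his rows:     2q = 6(1 - q)  =>  q = 3/4
p, q = Fraction(1, 4), Fraction(3, 4)
assert 6 * p == 2 * (1 - p)
assert 2 * q == 6 * (1 - q)
print(p, q)   # 1/4 3/4
```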

Writing player A's strategy as (x_1, x_2) and player B's strategy as (x_3, x_4), the labelled
constraint pairs for this game are:
\[
\begin{array}{rcl}
\text{A} & \text{label} & \text{B} \\
x_1 \ge 0 & 1 & 2 \cdot x_3 + 0 \cdot x_4 \le 1 \\
x_2 \ge 0 & 2 & 0 \cdot x_3 + 6 \cdot x_4 \le 1 \\
0 \cdot x_1 + 2 \cdot x_2 \le 1 & 3 & x_3 \ge 0 \\
6 \cdot x_1 + 0 \cdot x_2 \le 1 & 4 & x_4 \ge 0
\end{array}
\]

We plot the constraints of players A and B in their respective coordinate planes; the graphs are
shown below. As explained above, the tight constraints intersect at the vertices of each polygon, so
we mark each intersection point with the labels of the constraints that are tight there.
For example, at point B the tight constraints are x_3 ≥ 0, which has label 3, and 0 · x_3 + 6 · x_4 ≤ 1,
which has label 2; no constraint with any other label is tight at point B, so we label it B(0,2,3,0).
Similarly, we get the following labels for player B:

A(0,0,3,4)
B(0,2,3,0)
C(1,2,0,0)
D(1,0,0,4)

[Figure 1: Player B's polygon in the (x_3, x_4) plane, with vertices A(0, 0), B(0, 1/6), C(1/2, 1/6), and D(1/2, 0).]

Similarly, we get the following labels for player A:

E(1,2,0,0)
F(1,0,3,0)
G(0,0,3,4)
H(0,2,0,4)

[Figure 2: Player A's polygon in the (x_1, x_2) plane, with vertices E(0, 0), F(0, 1/2), G(1/6, 1/2), and H(1/6, 0).]

According to Theorem 1.1, a pair (x, y) is a Nash equilibrium iff (x, y) is completely labelled. From
Figure 1 and Figure 2, the labels of points G and C together cover all of {1, 2, 3, 4}, so (G, C) is an
equilibrium point; that is, ((1/6, 1/2), (1/2, 1/6)) is an equilibrium of the scaled system. Normalizing
these coordinates, we get the respective probabilities ((1/4, 3/4), (3/4, 1/4)). This is the Nash
equilibrium, and it matches the previously computed result. This method of calculating a Nash
equilibrium is known as vertex enumeration [3]; a sketch of the whole procedure for this example is
given below.
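The following end-to-end sketch of vertex enumeration for Example 1.1 is an illustration only (with
illustrative function names, and not the Lemke-Howson pivoting method itself): it enumerates the
vertices of the two scaled polygons with exact rational arithmetic, computes their label sets, and
prints the unique completely labelled non-origin pair after normalizing back to probabilities.

```python
from fractions import Fraction
from itertools import combinations

# Example 1.1 payoff matrices (A for the row player, B for the column player).
A = [[2, 0], [0, 6]]
B = [[0, 6], [2, 0]]

def vertices_2d(constraints):
    """Enumerate the vertices of a 2-D polygon given as triples (c1, c2, rhs)
    meaning c1*u + c2*v <= rhs: each vertex is a feasible intersection of two
    constraints taken with equality."""
    verts = []
    for (a1, a2, b1), (c1, c2, d1) in combinations(constraints, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:
            continue                                  # parallel constraints
        u = Fraction(b1 * c2 - a2 * d1, det)
        v = Fraction(a1 * d1 - b1 * c1, det)
        if all(e1 * u + e2 * v <= f for (e1, e2, f) in constraints):
            verts.append((u, v))
    return verts

# Scaled polygons of Example 1.1 (non-negativity written as -u <= 0).
P = [(-1, 0, 0), (0, -1, 0), (0, 2, 1), (6, 0, 1)]    # x1, x2 >= 0 and B^T x <= 1
Q = [(-1, 0, 0), (0, -1, 0), (2, 0, 1), (0, 6, 1)]    # x3, x4 >= 0 and A y <= 1

def labels_P(x):
    """Labels of a vertex of player A's polygon (m = n = 2)."""
    return ({i + 1 for i in range(2) if x[i] == 0}
            | {3 + j for j in range(2)
               if sum(B[i][j] * x[i] for i in range(2)) == 1})

def labels_Q(y):
    """Labels of a vertex of player B's polygon (m = n = 2)."""
    return ({i + 1 for i in range(2)
             if sum(A[i][j] * y[j] for j in range(2)) == 1}
            | {3 + j for j in range(2) if y[j] == 0})

# Vertex enumeration: test every non-origin vertex pair for complete labelling.
for x in vertices_2d(P):
    for y in vertices_2d(Q):
        if (any(x) or any(y)) and labels_P(x) | labels_Q(y) == {1, 2, 3, 4}:
            sx, sy = sum(x), sum(y)                   # normalize back to probabilities
            print([str(v / sx) for v in x], [str(v / sy) for v in y])
            # prints ['1/4', '3/4'] ['3/4', '1/4']
```

For larger games this brute-force enumeration of vertex pairs grows exponentially, which is exactly
the motivation for the Lemke-Howson pivoting scheme.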

References
[1] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of
computing a Nash equilibrium. SIAM Journal on Computing, 39(1):195–259, 2009.

[2] C. E. Lemke and J. T. Howson, Jr. Equilibrium points of bimatrix games. SIAM Journal on
Applied Mathematics, 12(2):413–423, 1964.

[3] N. Nisan et al., editors. Algorithmic Game Theory. Cambridge University Press, Cambridge, 2007.

[4] Lloyd S. Shapley. A note on the Lemke-Howson algorithm. In Pivoting and Extension, pages
175–189. Springer, 1974.
