Static games of complete information
Mattias Polborn

Outline:
1. Introduction
2. Definitions
3. Solution concepts for static games
4. Applications of static games


Introduction
Game theory models “strategic interaction”: several players can choose different actions, and payoffs depend both on a player’s own actions and on the actions of his “opponents” (other players).
⇒ Players need a “model” of how other players will act. This model should be as consistent as possible with what other players actually do.

Overview:
Static games / dynamic games
Complete information / incomplete information

Examples of games

Static games (Normal-Form games)
Example: Prisoner’s dilemma.

                     Player 2
                     Quiet      Fink
Player 1   Quiet     2, 2       0, 3
           Fink      3, 0       1, 1

Applications: prisoners confessing to the police; working on a joint project; price competition in a duopoly.



Battle of Sexes

Example: Battle of Sexes

                     Player 2
                     B          S
Player 1   B         1, 2       0, 0
           S         0, 0       2, 1

Applications: Battle of Sexes; coordinating on a standard.



Stag hunt game
Example: Stag hunt game

                     Player 2
                     Stag       Hare
Player 1   Stag      5, 5       0, 2
           Hare      2, 0       1, 1

Applications: stag hunt; coordinating on a standard; “security dilemma”.


Matching pennies

Example: Matching pennies.

                     Player 2
                     head       tails
Player 1   head      1, -1      -1, 1
           tails     -1, 1      1, -1

Applications: military; penalty shoot-outs (e.g. soccer).

2/3 game

Example: 2/3 game.
Variation A: Every student chooses an integer between 0 and 100. The person who is nearest to 2/3 of the median of all numbers gets $5.
Variation B: Every female student can choose an integer between 0 and 50, every male student can choose an integer between 0 and 100. The person who is nearest to 2/3 of the median of all numbers gets $5.

Definitions

Definition: A static game in normal form consists of the following:
1. the set of players {1, ..., n}
2. the players’ strategy spaces A_1, ..., A_n, with typical elements a_1, ..., a_n
3. the players’ payoff functions u_1(a_1, ..., a_n), ..., u_n(a_1, ..., a_n)

Definitions

Remarks:
1. The game is called static because players “move” (essentially) simultaneously, i.e., as they move, they cannot observe the other players’ moves.
2. Payoffs are given in “utils” ⇒ if there is uncertainty over which payoffs are achieved, players consider the expected value (i.e., expected utility).
3. Complete information means that the structure of the game (the players, all players’ available strategies and payoff functions) is common knowledge. It does not mean that players know perfectly how their opponents will move.

Solution concepts – overview

How are games solved?
Nash equilibrium requires that players have models about other players’ behavior, and that all these models are correct.
Dominance arguments impose less structure on beliefs about other players’ behavior.

Dominant and dominated strategies

Notation:
  a_i       strategy of player i
  a         combination of strategies: a = (a_1, ..., a_n)
  a_{-i}    strategies of i’s opponents: a_{-i} = (a_1, ..., a_{i-1}, a_{i+1}, ..., a_n)
  A_i       strategy space of player i
  A_{-i}    strategy space of i’s opponents
  u_i       player i’s payoff function: u_i : A → R

Dominant and dominated strategies

Definition (Dominated strategy)
Let a_i and a_i' be feasible strategies for player i. Strategy a_i is strictly dominated by strategy a_i' if for all a_{-i} ∈ A_{-i} the following holds:

    u_i(a_i, a_{-i}) < u_i(a_i', a_{-i})

In the definition of a weakly dominated strategy, the “<” is replaced by “≤”.
The strictly dominated strategy a_i is a bad choice for player i: independent of the other players’ strategies, it would be better to choose a_i'.

Dominant and dominated strategies

Definition (Dominant strategy)
The strategy a_i is called a strictly dominant strategy if the following inequality holds for all feasible other strategies a_i' ≠ a_i and for all strategies a_{-i} of i’s opponents:

    u_i(a_i, a_{-i}) > u_i(a_i', a_{-i})

Again, a weakly dominant strategy is defined by replacing “>” with “≥”.
If player i has a strictly dominant strategy a_i, it is quite clear that he should play that strategy.

Iterated elimination

                     Player 2
                     Left       Middle     Right
Player 1   Up        1, 1       1, 2       0, 0
           Down      0, 3       0, 1       2, 0

R is strictly dominated by M: no rational player 2 will ever choose R. Therefore, one can “eliminate” R from the table.
After that, D is dominated by U for player 1: no rational player 1 will ever choose D; we can hence eliminate the D row.
Given the remaining two cells, player 2 will choose M. ⇒ (U, M) will be played in equilibrium.
This procedure is called iterated elimination of strictly dominated strategies (IESDS).
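Below is a minimal Python sketch of IESDS for a finite two-player game in matrix form, applied to the payoff matrix above (the function and variable names are mine, not from the slides):

```python
import numpy as np

# Payoff matrices for the game above: rows = player 1 (U, D), columns = player 2 (L, M, R).
U1 = np.array([[1, 1, 0],
               [0, 0, 2]])
U2 = np.array([[1, 2, 0],
               [3, 1, 0]])

def iesds(U1, U2):
    """Iterated elimination of strictly dominated (pure) strategies."""
    rows, cols = list(range(U1.shape[0])), list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:                      # eliminate strictly dominated rows (player 1)
            if any(all(U1[q, c] > U1[r, c] for c in cols) for q in rows if q != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:                      # eliminate strictly dominated columns (player 2)
            if any(all(U2[r, d] > U2[r, c] for r in rows) for d in cols if d != c):
                cols.remove(c)
                changed = True
    return rows, cols

print(iesds(U1, U2))   # -> ([0], [1]): only (Up, Middle) survives
```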

Iterated elimination – Remarks

The first round of elimination only requires that player 2 is rational: you could always recommend that player 2 eliminate that strategy, even if you doubt that player 1 is really smart.
The next elimination is more problematic: it requires that the first elimination already took place (note that in the original game, D is not dominated by U!).
Consider the following variation of the game:

                     Player 2
                     Left       Middle     Right
Player 1   Up        1, 1       1, 2       -1000, 0
           Down      0, 3       0, 1       2, 0

IESDS predicts the same equilibrium, but if player 1 suspects that player 2 could be irrational, would it be wise to believe that player 2 eliminated R and hence to proceed and eliminate D?

Iterated elimination – Remarks

One can show: the sequence in which strictly dominated strategies are eliminated does not matter.
IESDS does not always produce a unique equilibrium. Consider this game:

                     Player 2
                     Left       Middle
Player 1   Up        1, 0       0, 2
           Down      0, 1       2, 0

In this game, both strategies of both players survive IESDS.

Nash Equilibrium

In many cases, dominance arguments do not yield a prediction regarding the outcome of the game. → Nash equilibrium (NE)
Advantage: (Almost) every game has a NE.
Disadvantage: Often, there are many NE.

Nash Equilibrium

No player should have an alternative strategy that gives him a higher payoff than his equilibrium strategy, keeping other players’ strategies fixed. (No regret about one’s action.)
Suppose the players come together for a pre-play negotiation: they talk about what strategies they should play in the game, but they cannot write binding contracts. If a player changes his mind, he need not stick to the agreement. Any reasonable agreement must be self-enforcing: no player can increase his own utility by deviating from his part of the agreement, if everyone else sticks to the agreement.
Suppose, after the moves are chosen, a player is given the following opportunity: he sees the moves of his opponents and can choose whether to change his move or not. If no player would (strictly) benefit from this option, then we have a NE.

Nash Equilibrium

Definition (Nash Equilibrium)
The strategy combination a* = (a_1*, ..., a_n*) is a Nash equilibrium (NE) if

    u_i(a_{-i}*, a_i*) ≥ u_i(a_{-i}*, a_i)   for all of i’s feasible strategies a_i ∈ A_i.

That is, a_i* solves max_{a_i ∈ A_i} u_i(a_{-i}*, a_i).
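For finite games in matrix form, the definition translates directly into a cell-by-cell best-response check. A small illustrative Python sketch (names are mine), applied to the Prisoner’s dilemma from the beginning of the section:

```python
import numpy as np

def pure_nash(U1, U2):
    """Return all (row, col) pairs that are pure-strategy Nash equilibria."""
    equilibria = []
    for r in range(U1.shape[0]):
        for c in range(U1.shape[1]):
            best_for_1 = U1[r, c] >= U1[:, c].max()   # no profitable row deviation
            best_for_2 = U2[r, c] >= U2[r, :].max()   # no profitable column deviation
            if best_for_1 and best_for_2:
                equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma (rows/cols: Quiet = 0, Fink = 1).
U1 = np.array([[2, 0], [3, 1]])
U2 = np.array([[2, 3], [0, 1]])
print(pure_nash(U1, U2))   # -> [(1, 1)]: (Fink, Fink) is the unique pure NE
```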

Nash Equilibrium

A game can have multiple Nash equilibria:

                     Player 2
                     Left       Middle
Player 1   Up        1, 1       0, 0
           Down      0, 0       2, 2

Game: “chicken”:

                     Player 2
                     Hawk         Dove
Player 1   Hawk      -10, -10     2, 0
           Dove      0, 2         1, 1

Nash Equilibrium

It is possible that a game has no equilibrium in pure strategies:

                     Player 2
                     head       tails
Player 1   head      1, -1      -1, 1
           tails     -1, 1      1, -1

Mixed strategy equilibrium

Definition (Mixed strategy)
Let S_i be the set of pure strategies of player i: S_i = {s_i1, s_i2, ..., s_iK_i}. A mixed strategy for player i is a probability distribution σ_i = (σ_i1, σ_i2, ..., σ_iK_i) on S_i.

Note: σ_ij is the probability of strategy s_ij. Probabilities are nonnegative and sum to 1: Σ_j σ_ij = 1.

Mixed strategy equilibrium

Suppose player 2 plays heads with probability p (= σ_21). What is player 1’s optimal mixed strategy?
If player 1 plays heads with probability q, his expected payoff is

    q[p · 1 + (1 − p) · (−1)] + (1 − q)[p · (−1) + (1 − p) · 1]
      = (p − (1 − p))q + (1 − q)(1 − p − p)
      = (1 − 2p)(1 − 2q)

If p > 1/2, then it is optimal to choose q = 1 (only heads).
If p < 1/2, then it is optimal to choose q = 0 (only tails).
For p = 1/2, any strategy delivers the same payoff; in particular, player 1 could choose q = 1/2. By symmetry, player 2 is then also indifferent between all his strategies and, in particular, he could play p = 1/2.
Hence p = q = 1/2 is the unique (mixed strategy) NE in Matching Pennies.
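A quick numerical illustration of this best-response logic (a sketch; the probability grid and function name are mine):

```python
import numpy as np

def payoff1(q, p):
    """Player 1's expected payoff in Matching Pennies when he plays heads
    with probability q and player 2 plays heads with probability p."""
    return q * (p * 1 + (1 - p) * (-1)) + (1 - q) * (p * (-1) + (1 - p) * 1)

for p in (0.3, 0.5, 0.7):
    payoffs = [payoff1(q, p) for q in np.linspace(0, 1, 5)]
    print(p, np.round(payoffs, 2))
# p = 0.3: payoff decreases in q  -> best response q = 0 (tails)
# p = 0.5: payoff is 0 for every q -> player 1 is indifferent
# p = 0.7: payoff increases in q  -> best response q = 1 (heads)
```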

Mixed strategy equilibrium

Definition (Mixed strategy NE)
The strategy combination σ* = (σ_1*, ..., σ_n*) is a mixed strategy Nash equilibrium if

    Eu_i(σ_{-i}*, σ_i*) ≥ Eu_i(σ_{-i}*, σ_i)

for all of i’s feasible mixed strategies σ_i ∈ Σ_i, and for all players i.

A pure strategy is a special case of a mixed strategy, putting the whole probability mass on one particular strategy. Hence this definition is not “another case”, but generalizes our earlier definition of NE.

Mixed strategy Nash equilibrium

Consider a game with a NE in which player i plays a (non-trivial) mixed strategy that puts positive probability on his strategies 1 and 2 (σ_i1 > 0, σ_i2 > 0, σ_i1 + σ_i2 = 1). His expected utility is

    σ_i1 Eu_i(s_i1, σ_{-i}*) + σ_i2 Eu_i(s_i2, σ_{-i}*)

Since we have a Nash equilibrium, this is at least as good as what i can obtain by playing s_i1:

    σ_i1 Eu_i(s_i1, σ_{-i}*) + σ_i2 Eu_i(s_i2, σ_{-i}*) ≥ Eu_i(s_i1, σ_{-i}*)

and hence

    σ_i2 [Eu_i(s_i2, σ_{-i}*) − Eu_i(s_i1, σ_{-i}*)] ≥ 0

Similarly, we can obtain

    σ_i1 [Eu_i(s_i1, σ_{-i}*) − Eu_i(s_i2, σ_{-i}*)] ≥ 0

by arguing that the mixed strategy must be at least as good as s_i2.

Mixed strategy Nash equilibrium

    σ_i2 [Eu_i(s_i2, σ_{-i}*) − Eu_i(s_i1, σ_{-i}*)] ≥ 0   and   σ_i1 [Eu_i(s_i1, σ_{-i}*) − Eu_i(s_i2, σ_{-i}*)] ≥ 0

hold simultaneously (with σ_i1, σ_i2 > 0) if and only if Eu_i(s_i1, σ_{-i}*) = Eu_i(s_i2, σ_{-i}*).

Player i’s equilibrium mixed strategy is as good for him as any other mixed strategy. This provides a practical method for finding a mixed strategy equilibrium.
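For example, in the Battle of the Sexes from the beginning of the section, the indifference condition pins down each player’s equilibrium mixing probability. A minimal sketch (variable names are mine; it assumes the game has a fully mixed equilibrium):

```python
from fractions import Fraction

# Battle of the Sexes from the example above:
# rows = player 1 (B, S), columns = player 2 (B, S); u1 and u2 are the two players' payoffs.
u1 = [[1, 0], [0, 2]]
u2 = [[2, 0], [0, 1]]

# Player 2 plays B with probability sigma such that player 1 is indifferent between B and S:
#   sigma*u1[0][0] + (1-sigma)*u1[0][1] = sigma*u1[1][0] + (1-sigma)*u1[1][1]
sigma = Fraction(u1[1][1] - u1[0][1], u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])

# Player 1 plays B with probability tau such that player 2 is indifferent between B and S:
tau = Fraction(u2[1][1] - u2[1][0], u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])

print(sigma, tau)   # -> 2/3 1/3: player 2 puts 2/3 on B, player 1 puts 1/3 on B
```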

Existence of Nash Equilibria

Under fairly general conditions, a Nash equilibrium in mixed strategies exists:

Proposition (Nash equilibrium existence, Nash (1950))
Every game which has finitely many players and finitely many strategies for each player has at least one NE in pure or mixed strategies.

Intuition for the proof (for two players): the best-response correspondence is nonempty and “hemi-continuous”; best-response correspondences must intersect. Formally: Kakutani’s fixed point theorem.

Existence of Nash equilibrium

Another existence result: Consider a game which has finitely many players and strategy spaces that are closed and bounded subsets of Euclidean space.
1. If all payoff functions u_i are continuous, then there exists a (pure or mixed-strategy) Nash equilibrium.
2. If, in addition, the strategy spaces are convex sets and the payoff functions u_i are quasi-concave in s_i (a player’s own strategy), then there exists a NE in pure strategies.

Cournot oligopoly (Antoine Augustin Cournot, 1838)

Two firms (1 and 2) with constant marginal costs c (and no fixed costs) produce a homogeneous good. They choose quantities x_1 and x_2 simultaneously.
The price results from the (inverse) demand function P(X), where X = x_1 + x_2 is total output:

    P(x_1 + x_2) = a − b(x_1 + x_2)

Note: Both firms have infinitely many possible strategies in this case; a representation in a table is therefore impossible.

Cournot oligopoly

Profit of firm 1:

    π_1 = [a − b(x_1 + x_2)] x_1 − c x_1

If firm 1 knew the output of firm 2 (x_2), the optimal x_1 would satisfy

    a − 2b x_1 − b x_2 − c = 0

Thus

    x_1 = R_1(x_2) = (a − c − b x_2) / (2b)

R_1 is called the reaction function or optimal reply function of firm 1. By similar arguments (maximize the profit of firm 2), the reaction function of firm 2 is

    x_2 = R_2(x_1) = (a − c − b x_1) / (2b)

Cournot oligopoly

In the NE, quantities must be mutually best responses, i.e., we are looking for a pair (x_1*, x_2*) such that both of these equations hold. Solving this linear equation system yields:

    x_1* = x_2* = (a − c) / (3b)

The total quantity produced is X = (2/3)(a − c)/b; this is more than in a monopoly ((1/2)(a − c)/b), but less than under perfect competition ((a − c)/b) (check! what is the intuitive reason?).
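A small numerical sketch (the parameter values a, b, c are mine, chosen only for illustration): iterating the two reaction functions converges to the Cournot-Nash quantities (a − c)/(3b).

```python
a, b, c = 10.0, 1.0, 1.0           # illustrative demand and cost parameters (my choice)

def R(x_other):
    """Best response x_i = (a - c - b*x_j) / (2b), truncated at zero."""
    return max((a - c - b * x_other) / (2 * b), 0.0)

x1 = x2 = 0.0
for _ in range(50):                # best-response dynamics
    x1, x2 = R(x2), R(x1)

print(x1, x2, (a - c) / (3 * b))   # all three are (approximately) 3.0
```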

Bertrand oligopoly (Joseph Bertrand, 1883)

2 firms, homogeneous good. Firms choose prices simultaneously; if one price is lower than the other, all consumers buy from the cheaper firm; if both prices are equal, demand splits 50:50.
There is a unique NE in which both firms choose p_1 = c = p_2. Clearly, p_1 = p_2 = c is a Nash equilibrium. Uniqueness: can there be an equilibrium with
1. p_i < c for at least one firm?
2. p_i > p_j > c?
3. p_i > p_j = c?

Bertrand oligopoly

1. p_1 = p_2 = c is the unique equilibrium, but players use weakly dominated strategies!
2. Many authors speak of a Bertrand or Cournot equilibrium. Note, however, that actually the games in the Bertrand and Cournot models are different; the equilibrium concept in both games is the same!
3. The result of the Bertrand model (price equal to marginal cost as with perfect competition, although there are only two competitors) is called the Bertrand paradox. A solution is to analyze repeated competition (→ dynamic games). Another solution: if goods are not homogeneous (i.e., substitutes, but not perfect ones), a Bertrand game has an equilibrium with prices greater than marginal costs.
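The Bertrand logic can be checked directly on a discretized price grid; a rough sketch (the demand function, cost and grid are my assumptions, not from the slides):

```python
# Discretized Bertrand duopoly: marginal cost c, demand D(p) = a - p, prices in cent steps.
a, c = 10.0, 2.0
prices = [round(c + 0.01 * k, 2) for k in range(0, 500)]   # candidate prices from c upward

def profit(p_own, p_other):
    if p_own > p_other:
        return 0.0                                 # the cheaper firm serves the whole market
    share = 0.5 if p_own == p_other else 1.0       # equal prices: demand splits 50:50
    return share * (a - p_own) * (p_own - c)

def is_nash(p1, p2):
    best1 = max(profit(p, p2) for p in prices)
    best2 = max(profit(p, p1) for p in prices)
    return profit(p1, p2) >= best1 and profit(p2, p1) >= best2

print(is_nash(c, c))              # True: p1 = p2 = c is an equilibrium
print(is_nash(c + 1.0, c + 1.0))  # False: undercutting by one cent is profitable
```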

Horizontal differentiation

In a city, there is a unit mass of consumers, distributed on the interval [0, 1]. Two retailers are located at the opposite ends of main street (i.e., one sits at 0, the other one at 1).
Consumers have unit demand, value the good at v, and have a traveling cost of t times the distance to the retailer from which they choose to buy.
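Given prices p_1 and p_2 (and v large enough that every consumer buys), the consumer indifferent between the two retailers sits at x̂ with p_1 + t x̂ = p_2 + t(1 − x̂), i.e. x̂ = 1/2 + (p_2 − p_1)/(2t). A minimal sketch of this demand split (the numerical values are mine):

```python
def demand_split(p1, p2, t):
    """Location of the consumer indifferent between retailer 1 (at 0) and retailer 2 (at 1),
    assuming v is large enough that every consumer buys. Retailer 1 serves [0, x_hat]."""
    x_hat = 0.5 + (p2 - p1) / (2 * t)
    return min(max(x_hat, 0.0), 1.0)   # clip to [0, 1]

t = 1.0
print(demand_split(1.0, 1.0, t))   # 0.5: equal prices -> the market splits in half
print(demand_split(0.8, 1.0, t))   # 0.6: the cheaper retailer serves more consumers
```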

Political competition (Hotelling-Downs model)

Voters’ “ideal points” are distributed on [0, 1].
2 candidates choose their platforms.
Each voter votes for the candidate who is closer to his ideal point.
→ Median voter theorem
Multidimensional policy models often do not have a Condorcet winner / pure-strategy equilibrium.
Example: Split $100 among 3 legislators.

Lobbying

A bureaucrat has to award a “prize” of value 1 to one of two competitors (e.g., choose a supplier for a government contract, award the Olympic Games to a host city, etc.). Competitors can spend money to influence the bureaucrat (A → x, B → y).
A’s success probability: x^α / (x^α + y^α). Interpretation of α?

A: max_x  x^α / (x^α + y^α) − x
B: max_y  y^α / (x^α + y^α) − y

First-order condition for A:

    [α x^(α−1) (x^α + y^α) − x^α · α x^(α−1)] / (x^α + y^α)^2 − 1 = 0

Assume symmetry (y = x):

    α x^(2α−1) = 4 x^(2α)  ⇒  x = α/4

What happens if α > 2?
Discussion: rent dissipation and the welfare loss from monopoly.
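A numerical sketch checking the symmetric equilibrium x = y = α/4 of the contest above (the grid and the values of α are my choices):

```python
import numpy as np

def payoff_A(x, y, alpha):
    """A's expected payoff: success probability times the prize (value 1) minus spending x."""
    if x == 0 and y == 0:
        return 0.5 - x                      # tie-breaking convention if nobody spends
    return x**alpha / (x**alpha + y**alpha) - x

for alpha in (0.5, 1.0, 1.5):
    y = alpha / 4                           # candidate equilibrium spending of the opponent
    grid = np.linspace(0, 1, 100001)
    best_x = grid[np.argmax([payoff_A(x, y, alpha) for x in grid])]
    print(alpha, y, round(best_x, 3))       # best response is (approximately) alpha/4
```

For α > 2, the candidate x = α/4 would yield 1/2 − α/4 < 0, so dropping out (x = 0) is a profitable deviation and the symmetric pure-strategy equilibrium breaks down; this is the point of the question above.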

The problem of the commons

The “commons” is an expression for a meadow which belongs to all farmers of a community together; every farmer can decide how many cows to graze on the commons.
x_i: number of cows of farmer i; X = Σ_{i=1}^{n} x_i; cows are divisible.
Cows produce milk: a cow produces f(X) units of milk, which has a price of 1 (this is just a normalization), with f'(X) < 0 (more cows → less grass per cow → less milk per cow) and f'' < 0 (needed for the second-order condition, but also plausible).
The price of a cow is c. There exists x̄ such that f(x̄) = c.

The problem of the commons

Given the other farmers’ decisions, farmer i maximizes his profit:

    max_{x_i}  [f(x_1 + ... + x_i + ... + x_n) − c] x_i

Condition for an optimum (differentiation with respect to x_i):

    f(x_i* + Σ_{j≠i} x_j*) + x_i* f'(x_i* + Σ_{j≠i} x_j*) − c = 0

There is a unique symmetric NE in the problem of the commons. The total number of cows is too large compared to the social optimum.

The problem of the commons

Can there be an equilibrium with x_i ≠ x_j? → Either the FOC for farmer i is not satisfied, or the FOC for farmer j is not satisfied (or both). (Economically: marginal cost is equal for all farmers, so marginal revenue f + x f' must be equal for all farmers, and that is possible only if all farmers choose the same quantity x.)
The symmetric equilibrium is characterized by

    f(n x*) + x* f'(n x*) − c = 0

Social optimum: maximize X f(X) − cX:

    f(X^FB) + X^FB f'(X^FB) − c = 0

Compare: since f' < 0 and n > 1, the LHS of the first condition is greater than

    f(n x*) + n x* f'(n x*) − c = f(X*) + X* f'(X*) − c

Hence X* > X^FB.
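A worked numerical illustration under an assumed linear technology f(X) = A − X (my parameterization, not from the slides; here f'' = 0, a boundary case of the second-order condition, but the comparison still goes through):

```python
# Commons with an assumed linear technology f(X) = A - X, so f'(X) = -1.
A, c, n = 10.0, 2.0, 5          # illustrative parameters (my choice)

x_star = (A - c) / (n + 1)      # symmetric NE:   f(n x) + x f'(n x) - c = 0  =>  A - (n+1)x - c = 0
X_star = n * x_star             # total herd size in the Nash equilibrium
X_fb = (A - c) / 2              # social optimum: f(X) + X f'(X) - c = 0      =>  A - 2X - c = 0

print(round(X_star, 2), X_fb)   # -> 6.67 vs 4.0: too many cows in equilibrium
```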

War of attrition

There are two firms in a market which is a natural monopoly. As long as two firms are in the market, both firms make a loss of dt in every small time interval of length dt. Once one firm gives up the fight, the other firm is the monopolist for all the following time, which gives it a (discounted) payoff of Π.
First, there are two asymmetric equilibria in pure strategies: (s_1* = give up at once, s_2* = never give up) and vice versa.
More interestingly, there is a symmetric mixed strategy equilibrium. Suppose firm 2 plays a mixed strategy “stop at time t according to the distribution function F(t) / density f(t)”.

War of attrition

The expected rent for firm 1, if firm 1 decides at time 0 to stop fighting at time t (of course only relevant if firm 2 is still fighting then), is

    −[1 − F(t)] t + ∫_0^t [−s + Π] f(s) ds

Firm 1 must be indifferent between all possible stopping times; hence we can differentiate this expression with respect to t and set it equal to zero:

    f(t) t − [1 − F(t)] + [Π − t] f(t) = 0

War of attrition

From this we get

    f(t) / (1 − F(t)) = 1/Π

i.e., [f(t)dt / (1 − F(t))] · Π, the expected yield of fighting a little longer, equals the (expected) cost dt.
Hazard rate at t, f(t)dt / (1 − F(t)): instantaneous probability that the other player will stop in the next short interval of length dt.

Integrate the last equation to get

    ∫_0^T f(t)/(1 − F(t)) dt = ∫_0^T (1/Π) dt
    −ln(1 − F(t)) |_{t=0}^{t=T} = (t/Π) |_{t=0}^{t=T}
    −ln(1 − F(T)) = T/Π  ⇒  F(T) = 1 − e^(−T/Π)
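A quick numerical check (the value of Π is mine) that the exponential stopping distribution F(T) = 1 − e^(−T/Π) indeed makes firm 1 indifferent between all stopping times, as required:

```python
import numpy as np

Pi = 5.0                                      # illustrative value of the monopoly payoff
F = lambda t: 1 - np.exp(-t / Pi)             # candidate stopping distribution
f = lambda t: np.exp(-t / Pi) / Pi            # its density

def expected_rent(t, steps=200000):
    """Firm 1's expected rent from planning (at time 0) to stop fighting at time t."""
    s = np.linspace(0, t, steps)
    integral = np.sum((Pi - s[:-1]) * f(s[:-1]) * np.diff(s))   # approximates the integral on the slide
    return -(1 - F(t)) * t + integral

print([round(expected_rent(t), 4) for t in (0.5, 1, 2, 5, 10)])
# every entry is (numerically) zero: firm 1 is indifferent between all stopping times
```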
