
Topic: Economics & Finance
Subtopic: Economics

Understanding Economics
Game Theory
Course Guidebook

Jay R. Corrigan
Kenyon College
4840 Westfields Boulevard | Suite 500 | Chantilly, Virginia | 20151‑2299
[phone] 1.800.832.2412 | [fax] 703.378.3819 | [web] www.thegreatcourses.com

LEADERSHIP
PAUL SUIJK President & CEO
BRUCE G. WILLIS Chief Financial Officer 
JOSEPH PECKL SVP, Marketing
JASON SMIGEL VP, Product Development
CALE PRITCHETT VP, Marketing
MARK LEONARD VP, Technology Services
DEBRA STORMS VP, General Counsel
KEVIN MANZEL Sr. Director, Content Development
ANDREAS BURGSTALLER Sr. Director, Brand Marketing & Innovation
KEVIN BARNHILL Director of Creative
GAIL GLEESON Director, Business Operations & Planning

PRODUCTION TEAM
TRISH GOLDEN Producer
SUSAN DYER Content Developer
ABBY INGHAM LULL Associate Producer
DANIEL RODRIGUEZ Graphic Artist
BRIAN SCHUMACHER Graphic Artist
OWEN YOUNG Managing Editor
CHRISTIAN MEEKS Editor
CHARLES GRAHAM Assistant Editor
CHRIS HOOTH Audio Engineer
ROBERTO DE MORAES Director
GEORGE BOLDEN Camera Operator
MATTHEW CALLAHAN Camera Operator
VALERIE WELCH Production Assistant

PUBLICATIONS TEAM
FARHAD HOSSAIN Publications Manager
TIM OLABI Graphic Designer
JESSICA MULLINS Proofreader
ERIKA ROBERTS Publications Assistant
RENEE TREACY Fact-Checker
WILLIAM DOMANSKI Transcript Editor

Copyright © The Teaching Company, 2021


Printed in the United States of America
This book is in copyright. All rights reserved. Without limiting the rights under copyright reserved
above, no part of this publication may be reproduced, stored in or introduced into a retrieval
system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying,
recording, or otherwise), without the prior written permission of The Teaching Company.
Professor Biography

Jay R. Corrigan
Kenyon College

Jay R. Corrigan is a Professor of Economics at Kenyon College.
He earned a BA in Economics from Grinnell College and
a PhD in Economics from Iowa State University.
Professor Corrigan’s writing has appeared in The Washington Post
and Barron’s. His scholarly publications in economics, public health,
and substance abuse journals have been cited more than 1,000 times.
His research has been covered by news outlets such as ABC, NBC,
and BBC World News, and his work was included in a Washington
Post list of the 10 best works on political economy in 2018.
Professor Corrigan is a recipient of Kenyon College’s Trustee
Teaching Excellence Award, and The Princeton Review named him
one of America’s best college professors. ■

TABLE OF CONTENTS

Introduction
Professor Biography
Course Scope

Guides
1  Game Theory Basics: The Prisoner’s Dilemma
2  Repeated Prisoner’s Dilemma Games
3  The Game of Chicken
4  Reaching Consensus: Coordination Games
5  Run or Pass? Games with Mixed Strategies
6  Let’s Take Turns: Sequential-Move Games
7  When Backward Induction Works—and Doesn’t
8  Asymmetric Information in Poker and Life
9  Divide and Conquer: Separating Equilibrium
10 Going Once, Going Twice: Auctions as Games
11 Hidden Auctions: Common Value and All-Pay
12 Games with Continuous Strategies

Supplementary Material
Bibliography
Answers
Image Credits



UNDERSTANDING ECONOMICS: GAME THEORY

COURSE SCOPE

Game theory is a framework for thinking more clearly and carefully
about strategic interactions in business, politics, international
relations, and even biology. Though it began as a niche field of
mathematics, game theory has become so important to economics that it’s
now covered in every introductory textbook and is a central part of every
graduate student’s first-year coursework.
A better understanding of game theory allows you to explain the otherwise
inexplicable. Why, for example, do corporate executives risk prison time
by conspiring to raise prices? Why is a college degree so important in the
job market even when the degree isn’t related to the job? Game theory can
also answer questions well beyond the traditional boundaries of economics.
For example, why do people confess to crimes they didn’t commit? Why,
during World War I, did peace break out spontaneously at points all along
the Western Front?
Lesson 1 introduces the most fundamental concepts in game theory: players,
strategies, payoffs, and finding an equilibrium. Once you’ve developed these
tools, you’ll apply them to the prisoner’s dilemma, the most famous—and
the most famously frustrating—game in the field.

In lesson 2, you’ll see how playing a game repeatedly can lead to the
emergence of cooperation even among players who are ostensibly at odds
with one another. You’ll apply lessons learned from repeated games to
understand price-fixing conspiracies and military history.
Lesson 3 focuses on conflict. You’ll learn that the game of chicken, where
drivers speed toward one another until one swerves, has more than one
equilibrium. Irresponsible though it may be, this game will help you
understand why animals fighting for mating rights rarely injure one another,
even when they have savagely sharp teeth or claws.
Why do people drive on the right in some countries and on the left in others?
Why did the movie industry choose VHS cassettes over Betamax? These
are all examples of coordination. In lesson 4, you’ll learn about when people
are likely to coordinate and when they aren’t.
While it’s often best to settle on a strategy and stick with it, it doesn’t always
pay to be predictable. There are situations where you do best by keeping
people guessing. But behaving randomly doesn’t have to mean behaving
arbitrarily. Being unpredictable in a specific way can improve your payoff
and keep others from taking advantage of you. In lesson 5, you’ll apply this
lesson to football.
Other games unfold sequentially, with you getting to see the choice
your opponent makes before you make yours. In lesson 6, you’ll see how
backward induction—starting at the end of the game and working back to
the beginning—allows you to solve these sequential-move games.
But backward induction can also lead to predictions too counterintuitive to
be believed. In lesson 7, you’ll consider some of the most famous thought
experiments in game theory and try to find ways to reconcile theory’s
predictions with people’s actual behavior.
Lessons 8 and 9 discuss games of private information. You’ll learn how
to overcome information asymmetries and apply those lessons to poker,
business, and dueling.

Lessons 10 and 11 discuss auctions, which drive huge segments of the
economy. Indeed, they’ve become so commonplace, you’re often taking
part in one without even knowing it.
Most games you’ll see in this course limit players to a fixed number of
strategies. But in some situations, you face a continuum of choices. How
do you choose from an infinite array of options? In lesson 12, you’ll learn
how a few lines of calculus can make seemingly impossible decisions
manageable. ■

LESSON 1

GAME THEORY BASICS: THE PRISONER’S DILEMMA


What do economists mean when they call something
a game? Here, it’s useful to draw a distinction
between a decision and a game. A decision is
when you choose what to do without regard for anyone else’s
response. A game, on the other hand, has two or more players
who choose what to do based on what they think other players
will do. In other words, a game is strategic, and game theory
is  the formalized study of interactions between strategic
players.

BIG PIG, LITTLE PIG


• Imagine there are two pigs confined to a pen with a feeding trough at
one end and a lever at the other. One of the pigs is big and strong but
slow, while the other pig is little and fast but weak. The pigs can press
the lever once, at which point 10 pounds of food will fall into the trough.
The question is which pig, if either, should push the lever, and which
pig, if either, should simply wait by the trough and hope the other pig
pushes the lever for them.
• If neither pig pushes the lever, no food is dispensed, and neither pig gets
to eat. But what if both pigs simultaneously press the lever and race back
to the trough? The little pig is faster, so she gets there first and manages
to eat some of the food before the big pig shoves her out of the way and
eats everything that’s left. Both pigs, in this case, expend the effort of
running to the lever and back.
• What if the little pig presses the lever while the big pig waits by the
trough? The big pig starts eating immediately, and because he’s
stronger, he doesn’t let the little pig have any of the food. This leaves
the little pig worse off than if she hadn’t pressed the lever at all, since
at least then she wouldn’t have wasted the effort of running to the lever
and back.


• Finally, what if the big pig presses the lever while the little pig waits by
the trough? Now, the little pig starts eating immediately. Because the big
pig is slow, the little pig manages to eat most of the food before the
big pig gets back to the trough, shoves the little pig out of the way, and
eats what’s left. The big pig may not get a lot of food, but it more than
offsets the effort he exerts trudging to the lever and back.

SOLUTION
• To find the solution, think of this as a game. There are two players—the
little pig and the big pig—and each has two potential strategies—waiting
by the trough or pushing the lever. Each player’s payoff can be measured
in terms of the food they eat minus the effort they expend.


• If the little pig thinks the big pig is going to press the lever, her best
response is to wait by the trough. That way, the little pig doesn’t waste
any energy, and she gets to eat most of the food before the big pig shoves
her out of the way.
• But what if the little pig thinks the big pig is going to wait by the trough?
Counterintuitively, perhaps, it’s still in her best interest to wait by the
trough. She won’t eat no matter what she does in this case, but if she
waits by the trough, at least she doesn’t waste any energy.
• This is the little pig’s dominant strategy—a strategy that’s a best response
no matter what the other player does. In this example, no matter what
the big pig plans to do, it’s always in the little pig’s best interest to wait
by the trough.
• The big pig knows what the little pig is going to do—wait by the
trough—because it’s her dominant strategy. All the big pig has to do
now is choose his best response. If he also waits by the trough, he doesn’t
get anything. If he pushes the lever, he doesn’t get much to eat, but he’s
better off than if he gets nothing.
• Because the little pig is going to wait by the trough, the big pig’s best
response is to push the lever. This result—where the little pig waits by
the trough and the big pig presses the lever—is what’s called a Nash
equilibrium.*
• A Nash equilibrium is a list of strategies for each player such that no
player has an incentive to change their strategy unilaterally. If the big
pig thinks the little pig is going to wait by the trough, the big pig has
no incentive to change his strategy from pushing the lever to waiting
by the trough. If he did, he’d go from getting a little to eat to getting
nothing to eat.

* The Nash equilibrium is named for mathematician John Nash, who
is played by Russell Crowe in the film A Beautiful Mind.

• Likewise, if the little pig thinks the big pig is going to push the lever,
she has no incentive to change her strategy from waiting by the trough
to pushing the lever. If she did, she’d go from eating most of the food to
eating just some of it. Neither player can improve their own payoff by
changing their strategy, so this is a Nash equilibrium.
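
Best-response analysis is mechanical enough to automate. The following
Python sketch checks every strategy pair and keeps the ones where neither
pig can do better by changing strategy alone. The lesson attaches no
numbers to the pigs’ payoffs, so the figures below are illustrative
assumptions chosen to fit the story: pounds of food eaten minus the effort
of running to the lever.

    # payoffs[(little pig's move, big pig's move)] = (little's payoff, big's payoff)
    # Assumed numbers: food eaten minus effort spent.
    payoffs = {
        ("wait",  "wait"):  (0, 0),    # no one presses the lever, no food
        ("wait",  "press"): (6, 2),    # little eats most; big's meal still beats his effort
        ("press", "wait"):  (-2, 10),  # little wastes effort; big eats everything
        ("press", "press"): (1, 5),    # both run; big shoves little aside
    }
    strategies = ["wait", "press"]

    def pure_nash(payoffs, strategies):
        """Strategy pairs where each player's choice is a best response to the other's."""
        return [(little, big) for little in strategies for big in strategies
                if payoffs[(little, big)][0] == max(payoffs[(s, big)][0] for s in strategies)
                and payoffs[(little, big)][1] == max(payoffs[(little, s)][1] for s in strategies)]

    print(pure_nash(payoffs, strategies))  # [('wait', 'press')]

On these assumed payoffs, the search returns exactly one pure-strategy
equilibrium: the little pig waits and the big pig presses.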

THE PRISONER’S DILEMMA


• The concepts of dominant strategy and Nash equilibrium can be used
to understand the prisoner’s dilemma, easily the most famous and the
most famously frustrating example in all of game theory.
• Imagine two career criminals—call them Tony and Uncle Junior—are
arrested in connection with a serious crime—say, armed robbery. There
isn’t enough evidence to convict either of them of armed robbery, but
there is enough evidence to convict both of something much less serious,
such as parole violation.


• If both Tony and Uncle Junior keep their mouths shut, they’ll be
convicted of parole violation and spend one year in prison. If, on the
other hand, both confess to committing armed robbery, both will be
convicted of armed robbery. But because they’ve spared the state the
expense of a trial, the judge will take it easy on them and send both to
prison for a relatively moderate five years.

• The game is most interesting when one player confesses and the other
denies. Let’s say Tony denies any knowledge of the armed robbery, but
Uncle Junior agrees to cooperate with the district attorney. Uncle Junior
admits to being involved in the armed robbery, but he depicts Tony as
the mastermind. In that case, Uncle Junior will be set free in exchange
for his cooperation, while Tony will be sentenced to 20 years in prison.

Guilt and innocence do not matter in the context of the prisoner’s
dilemma. The payoffs are the same whether or not the players are guilty,
so both players still have a dominant strategy to confess.

During the Salem witch trials, 19 people, most of them women, were
hanged, and one more was crushed by stones. But these weren’t the only
Salem residents accused of witchcraft; they were just the ones who didn’t
confess to witchcraft once they’d been accused.

SOLUTION
• What does game theory predict the prisoners will do if they’re
interrogated simultaneously but separately? Once again, this is a game
with two players, each with two potential strategies. They can either
confess or deny. Payoffs in this game are measured in terms of prison
sentences, where a longer sentence is worse from the standpoint of an
individual prisoner.


• If Tony thinks Uncle Junior is going to confess, his best response is to


confess, too: He’ll spend five years in prison if he confesses but 20 years if
he denies. If Tony thinks Uncle Junior is going to deny, his best response
is still to confess—now, he’ll spend just one year in prison if he denies,
but he’ll get off completely free if he confesses.
• No matter what Uncle Junior is going to do, Tony is better off if he
confesses. This is his dominant strategy. And because Uncle Junior’s
payoffs look just like Tony’s, Uncle Junior’s dominant strategy is also
to confess.
• In games where both players have a dominant strategy, finding the
Nash equilibrium is easy. It’s simply where both players play their
dominant strategy—that is, when both Tony and Uncle Junior confess
to armed robbery.
• What makes the prisoner’s dilemma so frustratingly counterintuitive is
that both players would obviously be better off if both played deny rather
than confess. In that case, they’d both go to prison for just one year
rather than five. But we’re not likely to end up at that better outcome,
because it’s always in each individual player’s best interest to confess.
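
To see the dominance argument laid out mechanically, here is a short
Python sketch that computes each prisoner’s best response to each choice
the other might make, using the sentences above written as negative
payoffs (more prison time is worse). A strategy is dominant when it shows
up as the best response in every case.

    # (Tony's move, Uncle Junior's move): (Tony's payoff, Uncle Junior's payoff),
    # measured as minus the years in prison given in the text.
    payoffs = {
        ("deny",    "deny"):    ( -1,  -1),
        ("deny",    "confess"): (-20,   0),
        ("confess", "deny"):    (  0, -20),
        ("confess", "confess"): ( -5,  -5),
    }
    strategies = ["deny", "confess"]

    def best_response(player, others_choice):
        """The highest-payoff reply to a fixed choice by the other player."""
        def utility(s):
            pair = (s, others_choice) if player == 0 else (others_choice, s)
            return payoffs[pair][player]
        return max(strategies, key=utility)

    for player, name in [(0, "Tony"), (1, "Uncle Junior")]:
        print(name, {other: best_response(player, other) for other in strategies})
    # Tony {'deny': 'confess', 'confess': 'confess'}
    # Uncle Junior {'deny': 'confess', 'confess': 'confess'}
    # Confess is each player's best response to everything: a dominant strategy.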

GOLDMAN’S DILEMMA
• A different version of the prisoner’s dilemma is Goldman’s dilemma,
named for Robert Goldman, a doctor specializing in sports medicine.
Between 1982 and 1995, he asked fighters, bodybuilders, and power
lifters the following question:
If I had a magic drug that was so fantastic that you’d win
every competition you would enter … for the next five years,
but it had one minor drawback—it would kill you five years
after you took it—would you still take the drug?
• Goldman found that more than half said yes—the median athlete in
this sample would die to win.


• What does this mean for the use of performance-enhancing drugs in
sports? Suppose Hans and Franz are the world’s top bodybuilders. One of
them will be this year’s Mr. Olympia, the top honor in professional
bodybuilding. These two are so equally matched, there’s no predicting
who will win.
• This is a game with two players and two potential strategies: stay clean or
dope. The players’ payoffs can be measured in terms of the probability of
winning and the likelihood of suffering negative side effects from doping.
• What happens if both Hans and Franz stay clean? Because they’re both
equally matched, they both have a 50% chance of winning. But what if
Hans dopes while Franz stays clean? Now Hans has a clear edge, virtually
guaranteeing he’ll win the Mr. Olympia title. However, he’s also taking
on the risks associated with performance-enhancing drugs.


• Finally, what if both use performance-enhancing drugs? In that case,
they remain equally matched. Both have a 50% chance of winning, but
both open themselves up to negative side effects.

SOLUTION
• In order for doping to be both players’ dominant strategy in this game,
both have to believe that increasing their chances of winning the
Mr. Olympia title by 50 percentage points is worth risking their life.
That might sound farfetched, but Robert Goldman’s research shows the
typical power athlete in his sample was willing to die to win.
• If Hans thinks Franz will stay clean, and assuming he’s willing to die to
win, his best response is to dope, increasing his chances of winning from
50% to 100%. If he thinks Franz will dope, his best response is still to
dope, increasing his chances of winning from 0% to 50%.
• The same logic applies to Franz, so both have a dominant strategy to
dope.** The Nash equilibrium is for both to dope, in which case they
are equally likely to win the Mr. Olympia title—just as they would have
been if both had stayed clean—but both also may suffer the potentially
deadly side effects from doping. This leaves both worse off than if they’d
stayed clean.
• The outcome of the prisoner’s dilemma game doesn’t always have to be
so relentlessly depressing. When the same two players play the game
together again and again, it’s possible they’ll learn to cooperate.

** Sadly, this doesn’t only apply to bodybuilding. Lance Armstrong
famously won seven consecutive Tour de France titles. Infamously,
he was then stripped of those titles when he admitted to doping.
There are no winners for the years 1999 through 2005, in part
because Jan Ullrich, who finished second three of the seven years
that Armstrong won, had also been banned for doping.

READINGS
“£66,885 Split or Steal?”
Nasar, A Beautiful Mind.
“What’s Left When You’re Right?”

QUESTIONS
1. Find this game’s two pure-strategy Nash equilibrium outcomes.
Assume a higher payoff is better than a lower one.
                     Colin
               Left         Right
Rose   Up      0, 3         10, 10
       Down    2, 1         5, 0

2. Explain why this game is an example of a prisoner’s dilemma. Assume
a higher payoff is better than a lower one.
                     Colin
               Left         Right
Rose   Up      2, 2         6, 1
       Down    1, 6         5, 5

LESSON 2

REPEATED PRISONER’S DILEMMA GAMES


This lesson focuses on what, if anything, changes when
games are played repeatedly. While games often have
a frustrating outcome when played once, a cooperative
outcome can be reached when games are repeated infinitely or
at least indefinitely. A cooperative outcome depends on things
like how patient the players are and how likely they think the
game is to end in the near future.

COOPERATION AND DEFECTION


• Imagine you and another player are playing a game where you each
have to decide whether to cooperate or defect.* If you both choose to
cooperate, you both earn $3. If you both choose to defect, you both earn
$1. But if you cooperate while the other player defects, she earns $5 and
you earn nothing. Likewise, if you defect while she cooperates, you earn
$5 and she earns nothing.

* Here, defecting simply means not cooperating.



• If you think the other player is going to cooperate, your best response
is to defect. Defecting earns you $5, while cooperating earns you just
$3. If you think the other player is going to defect, your best response
is still to defect. Here, defecting earns you $1, while cooperating earns
you nothing.
• This means your dominant strategy in this game is to defect. It’s always
your best response, regardless of what you think the other player will
do. By that same logic, defecting is the other player’s dominant strategy
as well.
• Just as with the prisoner’s dilemma and Goldman’s dilemma, the Nash
equilibrium is for both players to play their dominant strategy, even
though the $1 payoffs at that Nash equilibrium are clearly worse than
the $3 payoffs you’d earn if you both cooperated.

REPEATED GAMES
• This game is more interesting, but just as frustrating, if you play it twice.
Imagine your opponent tells you that if you cooperate in the first of two
rounds, she’ll reward you in the second round by cooperating, earning
you both $3 per round.
• But there’s a serious problem here: Because you’re only playing two
rounds, in the second round, you have a strong incentive to defect no
matter what you did in the first round. Here, you start by thinking about
what makes sense in the last round and then work your way back to the
first round. This is called backward induction.
• In the second—and last—round, you both have an incentive to defect
regardless of what happened in the first round. You can’t credibly
promise to cooperate in the second round in exchange for the other
player’s cooperation in the first. You could say that’s what you’d do, but
it wouldn’t be wise for the other player to believe you, given what you
both know about the incentives you face in the second round.


• Since the other player knows you’re likely to defect in the second round,
she has no incentive to cooperate in the first round. Naturally, you
respond by defecting in the second round, but she knew you were going
to anyway.
• The outcome of the repeated game is the same outcome from the one-
time game, but twice. From a theoretical standpoint, nothing would
change if you played the game 20 times or even 200 times. In each case,
both players have an incentive to defect in the last round.

COOPERATION IN WORLD WAR I TRENCH WARFARE


• As it happens, an infinitely repeated game can have an outcome
completely different from a finitely repeated game. The problem
with the finitely repeated prisoner’s dilemma is the last round, where
everyone has a strong incentive to defect. But if the game goes on forever,
there’s no last round, so there’s always an incentive to cooperate in the
current round.
• The Evolution of Cooperation by Robert Axelrod is the classic book
on achieving the cooperative outcome in repeated prisoner’s dilemma
games. One example the book gives is World War I trench warfare.
• Imagine you’re a soldier, crouching in a trench, peering across no-man’s-
land at an enemy soldier crouching in his own trench. You can think of
this as a game with two players—your unit and his—and two potential
strategies: You can shoot to kill, or you can shoot to miss.
• If you both shoot to kill, both units endure heavy casualties but neither
gains or loses ground—a stalemate. If you both shoot to miss, no one
suffers casualties and no one gains or loses ground—a stalemate, but
one without casualties.
• But if your unit shoots to miss and your enemy’s shoots to kill, you suffer
casualties and lose ground. Likewise, if your unit shoots to kill and your
enemy’s shoots to miss, he suffers casualties and loses ground.


• What should you do if you think the enemy is going to shoot to kill?
Your unit takes heavy casualties no matter what you do, but if you also
shoot to kill, at least you’re not overrun. And what should you do if you
think the enemy is going to shoot to miss? Your unit avoids casualties
no matter what you do, but if you shoot to kill, you also gain ground.
• Shooting to kill is your dominant strategy. No matter what you think
the enemy is going to do, your payoff is improved by shooting to kill.
Assuming the enemy thinks like you do, he also has a dominant strategy
to shoot to kill.
• If the game is only played once, game theory predicts everyone plays their
dominant strategy, in which case both sides suffer heavy casualties and
neither gains ground. This is a horrifying outcome, where thousands die
to no advantage for either side.


• But one of the unique features of World War I trench warfare was that
the same units faced one another day after day. They weren’t playing
this bloody version of the prisoner’s dilemma once—they were playing
it repeatedly. And because it was a repeated game, norms of cooperation
and trust could, and often did, develop. Geoffrey Dugdale, a British
army captain, said that he was
astonished to observe German soldiers walking about within
rifle range behind their own line. Our men appeared to take
no notice. … These people evidently did not know there was
a war on. Both sides apparently believed in the policy of “live
and let live.”**

TIT FOR TAT


• How did peace break out at points all along the 500-mile front?
To understand, think of this as a repeated prisoner’s dilemma game.
• Cooperation starts small. For example, you don’t shell their food trucks
because if you do, they’ll shell yours, and you, too, will go hungry.
Next, you don’t shoot anyone during mealtimes, or bad weather, or on
holidays. You could shoot them, but then they’d shoot back. Before
long, you’re never shooting anyone outside of major offensives ordered
by headquarters, and when you do shoot, you shoot to miss.
• This is an example of what Axelrod called tit for tat. In the first round,
you cooperate. In every subsequent round, you do whatever your
opponent did the round before. It’s a simple strategy, but it’s incredibly
effective at establishing and maintaining cooperation in a repeated
prisoner’s dilemma.

** There was nothing exceptional about the cooperation Captain
Dugdale saw. According to Tony Ashworth, a sociologist who
studied the live-and-let-live system, virtually every one of the 57
British divisions that fought in the trenches on the Western Front
engaged in this kind of impromptu truce.

• Eventually, the Allied generals insisted on seeing the corpses that would
result from raiding the German trenches. Small but relentless Allied
raids, followed by the retaliation you’d expect from Germans playing
tit for tat, caused cooperation to break down and fighting to resume.
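
Tit for tat is simple enough to simulate in a few lines. This Python
sketch uses the dollar payoffs from the game at the start of this lesson
($3 for mutual cooperation, $1 for mutual defection, $5 and $0 when
exactly one player defects) and plays tit for tat against an unconditional
defector, then against itself.

    # "C" = cooperate, "D" = defect; payoffs from this lesson's opening game.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_history, their_history):
        # Cooperate first; afterward, copy the opponent's previous move.
        return "C" if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return "D"

    def play(strategy1, strategy2, rounds):
        h1, h2, total1, total2 = [], [], 0, 0
        for _ in range(rounds):
            m1 = strategy1(h1, h2)
            m2 = strategy2(h2, h1)
            p1, p2 = PAYOFF[(m1, m2)]
            total1, total2 = total1 + p1, total2 + p2
            h1.append(m1)
            h2.append(m2)
        return total1, total2

    print(play(tit_for_tat, always_defect, 10))  # (9, 14): exploited once, then mutual defection
    print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): cooperation from first round to last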

PRICE-FIXING IN HIGHER EDUCATION


• The next example, price-fixing, is when businesses or other organizations
get together and agree to charge their customers higher prices. This is
good for the businesses, which can now earn larger profits, but it’s bad
for the ordinary consumers who have to pay those higher prices.
• Even though price-fixing is illegal and can land corporate executives in
prison, it still happens. Firms do, after all, want to maximize profits.
What is surprising from an economic standpoint is that the price-fixing
agreements last. That’s because an agreement to raise prices is a lot like
a prisoner’s dilemma.


• An example can be seen in higher education. Imagine that you and
another player are each in charge of choosing whether to offer this year’s
incoming class generous or stingy financial aid.

• If you both offer generous financial aid, you both enroll a strong class,
but you both have to have a large financial aid budget. If you both offer
stingy financial aid, you both enroll a strong class, but you’re able to
do it with only a small financial aid budget (which, to be clear, isn’t
good from the students’ standpoint, but you might imagine a college
would find this preferable to spending more lavishly on financial aid).

• The most interesting outcome is when one player cooperates and the
other defects. In this version of the game, that means you offer generous
financial aid while the other player is stingy. In that case, you enroll
a great class at the expense of having a large financial aid budget, and
the other player enrolls a weak class but with a small financial aid
budget.

Up until 1991, a group of 57 elite colleges allegedly belonged to
a price-fixing cartel. The agreement fell apart after more than 30 years
of cooperation because of an inquiry launched by the Department of
Justice. Because there was a good chance they would be compelled to
stop cooperating at some point in the near future, the colleges had
a higher incentive to defect.
• In order for offering generous aid to be a dominant strategy, it has to be
true that you both prefer a strong class and a large financial aid budget
over a weak class and a small financial aid budget. Likewise, you have
to prefer a great class and a large financial aid budget over a strong class
and a small financial aid budget. Assuming all this is true, both of you
have a dominant strategy of offering generous aid.


• If the game is played only once, the Nash equilibrium is where you both
play your dominant strategy. This is a wonderful outcome for students,
since everyone gets generous financial aid, but you both could have
attracted classes that were just as strong and less expensive if you’d agreed
to offer stingy aid.
• You don’t expect to see that kind of cooperation in a one-time game, but
you may manage to cooperate in an infinitely repeated game. Colleges
could reasonably think of themselves as playing a game that perhaps isn’t
infinitely repeated but has no clear end date. Since there’s no known final
round, there’s always an incentive to protect your future reputation by
cooperating in this round.

DISCOUNT RATE
• The equilibrium can be for both of you to cooperate by offering
stingy financial aid in every period—if, that is, you’re patient
enough. This means you have to care enough about what happens in
the future. Or, as economists put it, you have to have a low enough
discount rate.
• What do you gain if you cheat on your price-fixing agreement by offering
generous financial aid today? You get a great class this year, though it
comes at the expense of a larger financial aid budget. On balance, you
see that as a good thing.
• But what do you lose by cheating? Because you double-crossed it, the
other college will offer generous financial aid from now on. Knowing
that, you’ll want to offer generous financial aid as well, leaving you both
at the Nash equilibrium outcome from the one-time game, which is
a worse outcome than if you’d both continued to cooperate.
• Is getting a better outcome today worth getting a worse outcome
in every future period? It depends on how patient you are. If
you feel like what happens next year is just as important as what
happens today, you’ll surely want to cooperate with the agreement.


A better payoff today won’t outweigh a worse payoff forever after. But
if the future isn’t as important as the present, then you might choose to
cheat on the agreement today.
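
This trade-off can be made precise. A discount factor delta close to 1
corresponds to the low discount rate economists have in mind: a dollar
next round is worth nearly a dollar today. The sketch below uses the
$3/$1/$5 payoffs from the cooperate-or-defect game earlier in this lesson
and assumes the double-crossed player punishes by defecting forever, one
simple way to model the retaliation described above. For those payoffs,
cooperation pays exactly when delta is at least 1/2.

    REWARD, PUNISH, TEMPTATION = 3, 1, 5  # payoffs from this lesson's opening game

    def value_of_cooperating(delta):
        # 3 + 3*delta + 3*delta**2 + ... forever
        return REWARD / (1 - delta)

    def value_of_defecting(delta):
        # Grab 5 today, then earn 1 in every round after the punishment starts.
        return TEMPTATION + delta * PUNISH / (1 - delta)

    for delta in (0.3, 0.5, 0.9):
        coop, defect = value_of_cooperating(delta), value_of_defecting(delta)
        print(f"delta={delta}: cooperate={coop:.1f}, defect={defect:.1f}")
    # delta=0.3: cooperate=4.3, defect=5.4   -> impatient players cheat
    # delta=0.5: cooperate=6.0, defect=6.0   -> the tipping point
    # delta=0.9: cooperate=30.0, defect=14.0 -> patient players cooperate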

READINGS
Axelrod, The Evolution of Cooperation.
Case, “The Evolution of Trust.”
Kingston and Wright, “The Deadliest of Games.”

QUESTIONS
1. Rose and Colin are playing a repeated version of the prisoner’s dilemma
game. If Rose is playing tit-for-tat and Colin always defects, what will
each player’s payoff be in the first round? What will their payoffs be
in subsequent rounds?
                        Colin
                 Cooperate     Defect
Rose  Cooperate  2, 2          0, 3
      Defect     3, 0          1, 1

2. Rose and Colin are again playing a repeated version of the same
prisoner’s dilemma game. If both Rose and Colin now play tit-for-tat,
what will each player’s payoff be in the first round? What will their
payoffs be in subsequent rounds?
                        Colin
                 Cooperate     Defect
Rose  Cooperate  2, 2          0, 3
      Defect     3, 0          1, 1

LESSON 3

THE GAME OF CHICKEN


In another type of simultaneous-move game, two players
get into an argument, and they decide the only way to
settle it is to play a game of chicken. That means they’re
going to get into their cars and drive straight at one another
as fast as their cars can go, until, at the very last second, the
players have to simultaneously decide whether to swerve or to
keep driving straight.

MEASURING PAYOFFS
• Payoffs in this game can be measured in utils*—a unit that economists
use to measure satisfaction, especially when there is no other natural
measure. The more satisfying an outcome is, the more utils you derive
from that outcome.

* The word util is short for “utility.”



• Imagine that at the very last second, you decide to keep driving straight,
while your opponent simultaneously decides to swerve. You look tough,
so you’ll have a payoff of 1 util. Your opponent looks weak, so he’ll have
a payoff of −1 util. And the opposite is true if your opponent decides to
keep driving while you decide to swerve.
• If you both swerve—assuming you don’t crash into each other in the
process—it’s a draw. Neither of you looks tougher than the other, and
neither looks weaker, so you’ll both earn a payoff of 0.
• If you both drive straight as fast as you can go, it’s fair to assume you’d
both be horribly injured—perhaps even killed—in the crash. Figuring
out the payoff in this situation requires a value judgment: Is it worse to
suffer a horrible, life-threatening injury or to look weak?
• Opinions will vary, but for the sake of the game, let’s say suffering
a life-threatening injury is worse than looking weak. Both you and your
opponent will receive −10 utils in this situation.

SOLUTION
• If you think your opponent is going to drive straight, what’s your best
response? Your payoff is −1 if you swerve and −10 if you drive straight.
Neither option is good from your standpoint, but −1 is less bad, so your
best response is to swerve.
• If you think your opponent is going to swerve, your best response is
to drive straight, because your payoff is 1 if you drive straight and 0 if
you swerve.
• Unlike the big pig, little pig and prisoner’s dilemma games, in the game
of chicken, no player has a dominant strategy. But that doesn’t mean this
game has no Nash equilibrium. Two can be found using best-response
analysis: one where you drive straight and your opponent swerves, and
one where you swerve and he drives straight.**

** Both players swerving is not a Nash equilibrium, because it’s not
a point where the game can come to rest. If you truly believe your
opponent is going to swerve, you won’t want to swerve, because
you can get a higher payoff by driving straight.

• Unlike in the prisoner’s dilemma, where game theory gives a specific


prediction of what we expect to see happen, game theory in the game of
chicken tells us what we don’t expect to see. We don’t expect both players
to drive straight and crash, and we don’t expect both to swerve.
• Neither player has a dominant strategy, but there are two pure-strategy
Nash equilibria, both of which involve each player doing what the other
player doesn’t. If you drive straight, your opponent should swerve.
If he swerves, you should drive straight.
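
Feeding the util payoffs above into the same kind of pure-strategy search
sketched in lesson 1 confirms both equilibria. The helper function is
repeated here so the snippet runs on its own.

    # Chicken payoffs in utils, written as (your payoff, opponent's payoff).
    payoffs = {
        ("straight", "straight"): (-10, -10),  # both crash
        ("straight", "swerve"):   (  1,  -1),  # you look tough, he looks weak
        ("swerve",   "straight"): ( -1,   1),  # you look weak, he looks tough
        ("swerve",   "swerve"):   (  0,   0),  # a draw
    }
    strategies = ["straight", "swerve"]

    def pure_nash(payoffs, strategies):
        """Strategy pairs where each choice is a best response to the other."""
        return [(r, c) for r in strategies for c in strategies
                if payoffs[(r, c)][0] == max(payoffs[(r2, c)][0] for r2 in strategies)
                and payoffs[(r, c)][1] == max(payoffs[(r, c2)][1] for c2 in strategies)]

    print(pure_nash(payoffs, strategies))
    # [('straight', 'swerve'), ('swerve', 'straight')]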

THE HAWK–DOVE GAME


• A real-life application of the chicken game comes from biology. It’s
based on a paper by John Maynard Smith and George Price titled
“The Logic of Animal Conflict.” The authors use game theory and
computer simulations to understand why animals fighting over mates
usually only engage in limited war, meaning the conflict doesn’t lead
to serious injuries.
• This is true even for animals that possess fearsome offensive weapons,
such as fangs, horns, or antlers. For example, the authors point to male
snakes that wrestle with one another but don’t use their fangs. This is
a puzzle—a male willing to do whatever it takes to beat his rival would
seem to have a major advantage in the competition for mates. Why, then,
do we most often see limited war?
• We can capture the spirit of the authors’ results with a one-time game
where each player has two strategies: hawk and dove.*** A hawk will
always fight and won’t stop fighting until it’s either won or been gravely
injured. A dove, on the other hand, makes an initial display of fighting
but will always yield to a hawk and will always share with another dove,
splitting the gain from winning.

*** Today, this game is universally known as the hawk–dove game, but
the word dove did not appear in the original paper. Instead, the
authors referred to the second strategy as mouse.

SOLUTION
• Payoffs in this game will account for the gains from winning, the cost of
being seriously injured while losing a fight, and the cost associated with
a dove yielding to a hawk. We can use the following numerical values:
   • V is the gain from winning, which will be 40.
   • C is the cost of losing, which will be −80.
   • Y is the cost of yielding, which will be 0.
• Imagine you’re one of the animals playing this game. If you and your
opponent both play hawk, you’ll both fight until you’ve either won or
been injured by your opponent. Assuming you’re equally matched, each
of you has a 50% chance of winning and a 50% chance of losing. Since we
can’t be sure what will happen, we need to calculate your expected payoff:
the probability—or the fraction of time—some event occurs times the
payoff a player receives if that event occurs.
• You and your opponent are equally matched, meaning that if the two of
you were to play this game again and again, we’d expect you to win half
of your encounters and lose the other half. So your expected payoff from
any one encounter is the probability you win times your payoff if you
win, plus the probability you lose times your payoff if you lose:
1/2 × 40 + 1/2 × (−80) = −20.
• This gives you an expected payoff of −20. And because the game looks
exactly the same from your opponent’s perspective, he also has an
expected payoff of −20.
• Things are more straightforward for the other three outcomes. If you
play hawk and your opponent plays dove, he yields, meaning you win.
In that case, you earn a payoff of 40, and your opponent earns 0.
• If your opponent plays hawk and you play dove, you yield, meaning he
wins. You earn a payoff of 0, and your opponent earns a payoff of 40.


• Finally, if you both play dove, we assume you split the gain from winning.
Both you and your opponent earn a payoff of 20 (half of 40).
• You can now use best-response analysis to find this game’s pure-strategy
Nash equilibria. If you think your opponent will play hawk, your best
response is to play dove. And if you think your opponent will play
dove, your best response is to play hawk. The same is true from your
opponent’s perspective.
• Like the original game of chicken, this game has two pure-strategy Nash
equilibria, with another mixed-strategy Nash equilibrium lurking in the
background. This shows that we don’t expect the population to be made
up entirely of hawks or entirely of doves; instead, we expect there to be
some mix of the two.


REAL-LIFE APPLICATIONS
• Animals are not rational, strategic actors who know their payoffs and
make choices that maximize payoffs given what they believe other players
will do. So how can this kind of analysis explain animal behavior?
• The answer is evolution. A bighorn sheep, for example, doesn’t choose
to play hawk or dove. Instead, he’s born with genes that determine his
behavior—whether he will play hawk or dove.
• Depending on the conditions, one of those strategies may be associated
with a higher level of fitness. A sheep with those genes will have a fitness
advantage, so he will be more likely to find a mate and pass his genes
on to the next generation. This means more of the bighorn sheep in
the next generation will be born with genes associated with the more
successful strategy.
• Imagine a population of bighorn sheep made up entirely of doves.
This may seem like a utopian paradise, but it can’t last, because it’s not
an equilibrium.
• If one male is born with a genetic mutation that causes him to play hawk
rather than dove, he will have a tremendous fitness advantage. When
he plays the hawk–dove game, he will necessarily play against a dove,
meaning he will necessarily win.
• This means there will be more hawks in the next generation. Each of
those—born into a population now made up almost entirely of doves—
will also enjoy a fitness advantage. But unlike their father, they’ll
occasionally encounter another hawk, which will mean a fight and an
outcome with a negative expected payoff.

LIMITED WAR
• This pattern may continue for many generations, but it won’t continue
indefinitely. The population will eventually reach a point where hawks
are no longer more fit than doves.


• Imagine you’re a hawk born into a population that’s now half doves
and half hawks. There’s a 50% chance that when you play the game,
you’ll play against another dove. You’re sure to win that encounter,
guaranteeing you a payoff of 40. But there’s also a 50% chance your
opponent will be a hawk. The two of you will fight, earning you an
expected payoff of −20.
• Your expected payoff is the probability you face a dove times your payoff
if you face a dove, plus the probability you face a hawk times your
expected payoff if you face a hawk:
1/2 × 40 + 1/2 × (−20) = 10.
• Now, imagine you’re a dove born into that same population. There’s
a 50% chance your opponent will be a dove. You’ll share, earning you
a payoff of 20. The other 50% of the time, your opponent will be a
hawk. You’ll immediately yield, earning a payoff of 0. Calculating your
expected payoff the same way you did for the hawk, you get
1/2 × 20 + 1/2 × 0 = 10.
• On average, the dove earns a payoff of 10, just like the hawk. This is
called an evolutionarily stable equilibrium. A population like this one can’t
be successfully invaded by a mutant the way the all-dove population was
successfully invaded by a mutant hawk.
• A mutant hawk born into a 50/50 population will tip the scale slightly in
favor of the doves, meaning doves would enjoy a slightly higher expected
payoff and a slight fitness advantage, moving the population back to
evolutionarily stable equilibrium.
• Animals fighting for mates tend to engage in limited war because, at the
evolutionarily stable equilibrium, we expect the population to be made
up of a mix of hawks and doves. There will be conflict, but fights will
only end in one animal being seriously injured in the relatively rare case
where a hawk encounters another hawk.
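
The 50/50 result is no accident: an evolutionarily stable mix is a hawk
share at which hawks and doves are equally fit. This sketch recomputes
the expected payoffs above for several population mixes, using the
lesson’s values (V = 40, C = −80).

    HAWK_VS_HAWK = 0.5 * 40 + 0.5 * (-80)  # expected payoff of a fight: -20
    HAWK_VS_DOVE = 40                      # the dove yields, the hawk wins
    DOVE_VS_HAWK = 0                       # yield immediately
    DOVE_VS_DOVE = 20                      # split the gain from winning

    def fitness(hawk_share):
        hawk = hawk_share * HAWK_VS_HAWK + (1 - hawk_share) * HAWK_VS_DOVE
        dove = hawk_share * DOVE_VS_HAWK + (1 - hawk_share) * DOVE_VS_DOVE
        return hawk, dove

    for p in (0.0, 0.25, 0.5, 0.75):
        hawk, dove = fitness(p)
        print(f"hawks={p:.0%}: hawk fitness={hawk:.1f}, dove fitness={dove:.1f}")
    # hawks=0%:  hawk=40.0, dove=20.0  -> a mutant hawk invades
    # hawks=25%: hawk=25.0, dove=15.0  -> hawks keep spreading
    # hawks=50%: hawk=10.0, dove=10.0  -> the evolutionarily stable mix
    # hawks=75%: hawk=-5.0, dove=5.0   -> doves now do better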


READINGS
Leeson, “Oracles.”
Smith and Price, “The Logic of Animal Conflict.”

QUESTIONS
1. True or false: When playing chicken, your dominant strategy is to
drive straight.
2. Fill in the blanks in the following payoff matrix so it’s clear this is
a game of chicken. Assume a higher payoff is preferred to a lower one.

                        Colin
                 Straight          Swerve
Rose  Straight   _____, _____      2, 0
      Swerve     0, 2              _____, _____


LESSON 4

REACHING CONSENSUS: COORDINATION GAMES


On the surface, coordination games look a lot like the
game of chicken. But where chicken is a game of
conflict, coordination games are about consensus.
One thing the games do have in common is that they both
have multiple equilibria. This lesson focuses on ways to choose
between equilibria in a game that has more than one.

WIRELESS CHARGING
• According to the technology website CNET, the next big advancement
in how we charge our devices will be over-the-air wireless charging.
But this new technology won’t take off until the industry settles on
a single standard.
• Imagine a game with two players—Apple and Samsung—who must
simultaneously choose one of the approved standards for over-the-air
wireless charging—Powercast or WattUp—to incorporate into their
next generation of phones. Their strategies, then, are the two charging
standards they can choose between.
• In the simplest version of this game, we’ll assume the two charging
standards have some minor differences but are equally good. All that
matters from Apple and Samsung’s standpoint is whether they adopt
the same standard or different ones. If they adopt the same standard, it
will catch on quickly and people will be eager to buy new phones.
• But if Apple and Samsung adopt different wireless charging standards,
companies like Toyota, Boeing, and Starbucks won’t be so eager to
incorporate either technology into their cars, planes, and coffee shops,
because it’s not so clear which of the standards will catch on. This gives
the phone-buying public less of an incentive to buy new phones.

SOLUTION
• If both Apple and Samsung adopt the Powercast standard, each phone
maker earns a relatively high payoff. Suppose each firm’s profits increase
by $30 billion. The same is true if both Apple and Samsung choose the
WattUp standard. But if one of the phone makers chooses Powercast
and the other chooses WattUp, each phone maker earns a relatively low
payoff of $10 billion.
• Apple and Samsung both have the best response of choosing whichever
standard the other chooses. This earns each a payoff of $30 billion rather
than the $10 billion each would receive if they chose different standards.
• Unlike in the prisoner’s dilemma, neither player in this game has
a dominant strategy. There’s no one strategy that offers a better payoff
no matter what the other player chooses.
• Like in the game of chicken, best-response analysis shows that this game
has two pure-strategy Nash equilibria. But unlike in the game of chicken,
these equilibria involve both players making the same choice, not the
opposite choice.


• This is called a game of pure coordination. It doesn’t matter which
standard Apple and Samsung coordinate on, so long as they coordinate.

ASSURANCE GAMES
• Suppose that while WattUp can charge devices that are up to 15 feet away,
Powercast can charge devices as far as 80 feet away. Because Powercast
is a superior technology, both Apple and Samsung will earn a payoff of
$50 billion, rather than $30 billion, if they coordinate on Powercast.
• If Apple thinks Samsung is going to choose Powercast, its best response
is to choose Powercast as well, earning a payoff of $50 billion rather than
the $10 billion it would earn with WattUp. If Apple thinks Samsung is
going to choose WattUp, its best response is to choose WattUp as well,
earning a payoff of $30 billion rather than just the $10 billion it would
earn with Powercast.


• Interestingly, both Apple and Samsung have the same two Nash
equilibria in pure strategies: one where both choose Powercast and
one where both choose WattUp. The fact that Powercast is clearly the
superior technology doesn’t eliminate the WattUp equilibrium.
• This variation on the coordination game is called an assurance game.
It differs from the pure coordination game only in that both players agree
that one of the Nash equilibria outcomes is clearly better than the other.
• Now, imagine Apple and Samsung get together to discuss their strategies
before playing the game. We can be pretty confident they’d agree on
the Powercast standard. Once the two players have assured one another
they’ll both choose Powercast, neither has an incentive to renege.
• If the two aren’t allowed to communicate, it would seem that one of these
outcomes is likely to be a focal point, or a solution people are naturally
drawn to without communication: the one where both choose Powercast
and earn a payoff of $50 billion each. There will, however, be exceptions.*
If one more change is made to the game, settling on the lower-payoff
Nash equilibrium outcome actually becomes the norm.

THE STAG HUNT


• This next variation on the coordination game is called the stag hunt.
It was first described by Jean-Jacques Rousseau, the 18th-century
French philosopher.
• In this game, you and another player simultaneously decide whether
to hunt the stag or a hare. Either of you, on your own, could catch
a hare with certainty. Hunting the stag, though, is more challenging
and requires teamwork.

* Sony’s Betamax (Beta) video cassette recorders hit the market
a year before rival VHS recorders. Beta offered higher resolution,
better sound quality, and a more stable picture. It was the superior
technology, yet VHS won the video format war.

• If you both hunt the stag, you’ll kill it and split the meat. But if you hunt
the stag while the other player hunts a hare, you’ll go hungry. Both
players’ two strategies are to hunt the stag or a hare, and payoffs are
measured in terms of the meat you take home.
• Like all of the coordination games discussed so far, this one has two pure-
strategy Nash equilibria: one where you both play stag and one where
you both play hare. And like the assurance game, one of the equilibria
outcomes offers both players a higher payoff. Game theorists call this
payoff dominance.
• But unlike the assurance game, one of the two strategies in this game
eliminates all risk. If you choose to hunt hare, you earn a payoff of 1 no
matter what the other player chooses to do. It doesn’t matter if she hunts
the stag or a hare. You go home with hare either way. The equilibrium
where you both hunt a hare risk dominates the equilibrium where you
both hunt the stag because the hare/hare equilibrium is less risky.
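
The risk calculation behind that choice fits in a few lines. The hare’s
guaranteed payoff of 1 comes from the lesson; the stag payoffs are
illustrative assumptions (3 for your share if you both hunt the stag, 0 if
you end up hunting it alone). Hunting the stag is worthwhile only if you
are confident enough that your partner will join you.

    STAG_SUCCESS, STAG_ALONE, HARE = 3, 0, 1  # stag numbers are assumed

    def expected_stag_payoff(p_partner_hunts_stag):
        p = p_partner_hunts_stag
        return p * STAG_SUCCESS + (1 - p) * STAG_ALONE

    for p in (0.2, 1 / 3, 0.8):
        stag = expected_stag_payoff(p)
        choice = "stag" if stag > HARE else "hare" if stag < HARE else "indifferent"
        print(f"P(partner hunts stag)={p:.2f}: stag pays {stag:.2f} -> {choice}")
    # With these numbers, the safe hare is the better bet whenever your
    # confidence in your partner falls below 1/3.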

SOLUTION
• One Nash equilibrium offers a higher payoff, and the other offers lower
risk. Knowing this, which strategy would you choose to play?
• When experimental economists have asked people to play games like this
one in the laboratory, they’ve found that the more experience the players
have, the more likely they are to choose lower risk over higher reward.
• The experimental research in this area has tended to focus on a version
of the stag hunt called the minimum effort game. As an example,
participants might have to choose to contribute between one and seven
units toward a communal fund.
• Each player’s payoff would equal $2 times the minimum contribution
made by any member of the group, minus $1 times the amount they
contributed themselves.
• If everyone in the group contributes the maximum of seven units, each
participant receives a payoff of $7:
$2 × 7 − $1 × 7 = $7.
• This is a Nash equilibrium because if you believe everyone else will
contribute seven to the fund, you have no incentive to reduce your own
contribution. If, for example, you lowered your contribution from seven
to six, your payoff would fall from $7 to $6. That’s because your payoff is
$2 times the minimum contribution—which would now be six—minus
$1 times your own contribution—also six—equaling $6.
• This is just one of the game’s Nash equilibria. If you think everyone
else will contribute six, you have no incentive to raise your contribution
from six to seven. This would increase your cost without changing the
minimum contribution.
• You also have no incentive to lower your contribution from six to five.
This would lower your cost, but it would also lower the minimum
contribution, leaving you with a lower payoff. The same can be said
about everyone contributing five, four, three, two, or even one. So this
game has seven pure-strategy Nash equilibria.
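
The payoff rule is easy to check by brute force. The sketch below assumes
a group of four players, since the lesson doesn’t fix the group’s size,
and verifies that no player can gain by unilaterally deviating from any
common contribution level between one and seven.

    def payoff(my_contribution, all_contributions):
        # $2 times the group's minimum contribution, minus $1 times your own.
        return 2 * min(all_contributions) - 1 * my_contribution

    GROUP_SIZE = 4  # assumed group size
    for c in range(1, 8):
        others = [c] * (GROUP_SIZE - 1)
        current = payoff(c, [c] + others)
        best_deviation = max(payoff(d, [d] + others) for d in range(1, 8))
        print(f"everyone plays {c}: payoff ${current}, best unilateral deviation ${best_deviation}")
    # In every row the two numbers match, so no unilateral deviation ever
    # helps: all seven common-contribution profiles are Nash equilibria.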


FOCAL POINT
• Which, if any, of these equilibria is a focal point?
• The one where everyone contributes the full seven units to the group
fund seems conspicuous because it offers the highest payoff. It’s the
payoff dominant equilibrium. But there’s also something conspicuous
about the equilibrium where everyone contributes just one unit.
• If you contribute seven to the group fund, there’s a chance everyone else
will be just as generous as you, in which case you’ll earn a $7 payoff. But
there’s also a chance someone in the group will contribute just one. In
that case, your payoff would be $2 times the minimum contribution—
just one unit, in this case—minus $1 times your contribution of seven.
You’d earn a payoff of −$5.
• But if you contribute just one unit to the group fund, you know what
your payoff will be: $1. Contributing just one unit to the group fund
eliminates all risk, so everyone making the minimum contribution is the
risk dominant equilibrium.

REAL-LIFE APPLICATION
• The earliest study to look at this game was by John Van Huyck and
others. They found that the first time people play this game, most are
generous. Most people have enough faith in their fellow participants that
they’re willing to make a large contribution to the group fund.
• But it’s the minimum contribution that determines everyone’s payoff.
The minimum contribution in the first round is typically only about
two units. This means the people who were willing to take a chance and
make the maximum contribution end up losing money.
• Not surprisingly, most people contribute less and less each round,
until by the 10th round, the majority of people make the smallest
possible contribution.
• In a sense, this outcome is frustrating in the same way the prisoner’s
dilemma is frustrating. Everyone could enjoy a better payoff if everyone
would play seven instead of one.

• But in another sense, this outcome is even more frustrating than the
prisoner’s dilemma. Unlike in the prisoner’s dilemma game, the outcome
where everyone enjoys a better payoff is a Nash equilibrium outcome.
If we could only get there, no one would have an incentive to move
away from it.

READINGS
Dugar, “Nonmonetary Sanctions and Rewards in an Experimental
Coordination Game.”
Schelling, The Strategy of Conflict.
Van Huyck, Battalio, and Beil, “Tacit Coordination Games,
Strategic Uncertainty, and Coordination Failure.”

QUESTIONS
1. What kind of coordination game is this: pure coordination, assurance,
or stag hunt? Assume a higher payoff is preferred to a lower one.
                     Colin
               Red          Blue
Rose   Red     2, 2         0, 0
       Blue    0, 0         5, 5

2. Fill in the blanks in this payoff matrix so it’s clear this is a stag hunt.
Assume a higher payoff is preferred to a lower one.

                            Colin
                     Watch TV          Dinner and a movie
Rose  Watch TV       _____, _____      1, 0
      Dinner and
      a movie        0, 1              3, 3

LESSON 5

RUN OR PASS? GAMES WITH MIXED STRATEGIES


When you used best-response analysis to find
a game’s pure-strategy Nash equilibria in previous
lessons, there was occasionally a third equilibrium
hiding in the background—a mixed-strategy Nash equilibrium
that involved players randomizing between their pure
strategies. This lesson focuses on games with mixed strategies
and why it doesn’t always pay to be predictable.

RUN OR PASS
• Run or Pass, a game developed by Matt Rousu, is a dramatically
simplified version of American football. It’s not necessary to have any
knowledge of football beyond the fact that gaining more yards is good
for the offense and bad for the defense.
• Imagine that you are your team’s offensive coordinator, which means
that you call the plays for your team’s offense, or the players who have
the ball and try to move it down the field. Your opponent is the other
team’s defensive coordinator, which means he calls the plays for his
team’s defense, or the players who are trying to stop yours from moving
the ball down the field.
• Because this is a simplified version of football, you will each choose
between just two strategies: You have to choose whether your team will
run or pass, and your opponent has to simultaneously decide whether
his team will prepare to stop the run or stop the pass.

PAYOFFS
• Your objective is to gain yards, and your opponent’s objective is to stop
you from doing that. Both of your payoffs can therefore be measured in
yards, where gaining yards is a good thing for you and an equally bad
thing for your opponent.


• What happens if you decide your team should pass while your opponent
simultaneously decides his team should prepare to stop the pass? You
don’t do too well—you gain just two yards, and the other team gives up
only two yards. This is because he correctly anticipated that your team
was going to pass.
• But what happens if you decide your team should pass while he decides
his team should prepare to stop the run? This time, you do much better:
You gain seven yards, which means the other team gives up seven yards.
• If you decide to run the ball while your opponent prepares to stop the
run, you gain just one yard. But if he prepares his team to stop the pass
on a play when you’ve decided to run, you gain six yards.
• This is a constant-sum game (specifically, a zero-sum game) because the payoffs
in each cell of the payoff matrix add up to zero. This is also a game of
pure conflict: Every yard you gain is a yard your opponent gives up. You
can’t do well unless he does poorly.


BEST-RESPONSE ANALYSIS
• You can use best-response analysis to figure out what you should do given
what you think your opponent is going to do. If he’s going to prepare to
stop the pass, you’d gain two yards if you pass but six yards if you run,
so your best response is to run. If he’s going to prepare to stop the run,
you’d gain seven yards if you pass and only one if you run, so your best
response is to pass.
• Whatever you think your opponent is going to prepare for, you want to
do the opposite. And he wants to be ready to stop whatever he thinks
you’re going to do.
• In every other game discussed so far, there was at least one pure-strategy
Nash equilibrium. Here, for the first time, that’s not the case. That
doesn’t mean there’s no equilibrium.* Instead, the Nash equilibrium in
this game will involve mixed strategies. Both you and your opponent
need to be unpredictable. You’ll also need to be deliberate about the way
you randomize, or your opponent can take advantage of you.

OFFENSIVE STRATEGY
• If your opponent never prepares to stop the pass, you’ll gain seven yards
if you pass. And if he always prepares to stop the pass, you’ll gain just
two yards if you pass.
• But what if he’s less predictable? Suppose, for example, he prepares to
stop the pass 60% of the time. There’s a 60% chance he’ll stop the pass,
in which case you’ll gain just two yards. And there’s a 40% chance he’ll
stop the run, in which case you’ll gain seven yards. So your expected
payoff is four yards:
0.6 × 2 + 0.4 × 7 = 4.

* John Nash won the Nobel Prize in Economics for proving that
every finite game—meaning a finite number of players choosing
between a finite number of pure strategies—will have at least one
Nash equilibrium.

Figure 1

• Connecting these points on a graph shows the number of yards you
expect to gain if you pass as a function of the probability your opponent
stops the pass (Figure 1). Not surprisingly, passing works best for you
when your opponent is never prepared for it, and passing works worst
for you when he’s always prepared for it.
• Now, suppose you always run and your opponent always prepares to stop
the run. You’ll gain just one yard if you run. But if he always prepares
to stop the pass, you’ll gain six yards if you run.
• What if you always run and your opponent prepares to stop the pass 60%
of the time? There’s a 60% chance he prepares to stop the pass, in which
case you’ll gain six yards. And there’s a 40% chance he’ll stop the run,
in which case you’ll gain one yard. Your expected payoff is four yards:
0.6 × 6 + 0.4 × 1 = 4.


Figure 2

• Connecting these points with a second line on the graph shows the number of
yards you gain if you run as a function of the probability your opponent
prepares to stop the pass (Figure 2).
• With this graph, you can determine your best response given the
probability with which you think your opponent is going to prepare to
stop the pass.
• If you think your opponent is going to stop the pass less than 60% of
the time, your best response is to always pass (Figure 3). If, on the other
hand, you think he’s going to stop the pass more than 60% of the time,
your best response is to always run (Figure 4). The only time you’re
indifferent between running and passing is when he stops the pass 60%
of the time.


Figure 3

Figure 4


DEFENSIVE STRATEGY
• Turning now to the perspective of the defensive coordinator, the best
outcome for him is where you’re indifferent between running and
passing, which is the point on the graph where the two lines intersect.
That means his optimal strategy is to stop the pass 60% of the time and
stop the run the remaining 40% of the time.
• But it’s not enough to be unpredictable—he needs to be unpredictable
in a carefully calibrated way. If he prepares to stop the pass a third of
the time and stop the run the other two-thirds of the time, you can
exploit him by always passing, in which case you gain more yards per
play on average.
• As the defensive coordinator, he chooses the probability with which he
plays his strategies not so that it leaves him indifferent between his two
strategies, but so that it leaves you indifferent between yours. Again,
that’s because your expected payoffs will never be so low—and, by
extension, his will never be so high—as when you’re indifferent between
your two pure strategies.

SOLUTION
• You can quickly find the solution with a few lines of algebra. Remember
that, as the offensive coordinator, you’re looking to minimize your
opponent’s expected payoff, and you’ll do that by mixing between
your two strategies such that he’s indifferent between his. To do this
algebraically, we’ll say that p is the probability you pass, and (1 − p) is
the probability you run.
• If you want to choose p to leave your opponent indifferent between
stopping the pass and stopping the run, that means you’re going to
need to choose p such that his expected payoff from stopping the pass
equals his expected payoff from stopping the run. Each of these expected
payoffs can be written as a function of p.


• The defense’s expected payoff from stopping the pass is

E(π_D,SP) = p × (−2) + (1 − p) × (−6).
• Focusing on the term on the left, π represents the payoff, D represents the
defense, SP represents stopping the pass, and E represents expected value.
• Turning to the right-hand side of the equation, you have the probability
you pass times your opponent’s payoff if you pass when he’s prepared to
stop the pass, plus the probability you run times his payoff if you run
when he’s prepared to stop the pass.
• Multiplying through and combining the terms, this simplifies to

E(π_D,SP) = 4p − 6.
• The defense’s expected payoff from stopping the run is similar:

E(π_D,SR) = p × (−7) + (1 − p) × (−1).
• Here, SP is replaced with SR, which represents stopping the run, and the
payoffs are replaced with those from stopping the run.
• This simplifies to

E(π_D,SR) = −6p − 1.
• To find the value of p that leaves your opponent indifferent between
stopping the pass and stopping the run, equate the two expected payoffs
like this:

4p − 6 = −6p − 1.
• Next, rearrange and solve for p. Start by adding 6p to both sides of
the equation.

10p − 6 = −1.
• Then, add 6 to both sides.

10p = 5.

• Finally, divide both sides of the equation by 10.

p = 1/2.
• This means you should pass with a probability of 1/2, or 50% of the
time, and run the other 50% of the time. And this leaves your opponent
indifferent between stopping the run and stopping the pass. Either way,
he expects to give up four yards per play.
• This means the mixed-strategy Nash equilibrium for this game is where
you, as the offense, pass 50% of the time and run 50% of the time, while
the defense stops the pass 60% of the time and stops the run 40% of
the time.
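
The indifference logic above works for any 2×2 zero-sum game. Here is a short Python sketch (an illustration, not from the course) that solves for both equilibrium mixes directly from the offense's payoff matrix; with the Run or Pass payoffs it reproduces the 50/50 offensive mix and the 60/40 defensive mix.

def solve_2x2_zero_sum(A):
    # A[i][j] is the offense's payoff when it plays row i (0 = pass, 1 = run)
    # and the defense plays column j (0 = stop the pass, 1 = stop the run).
    # Assumes no pure-strategy equilibrium, so the equilibrium mix is interior.
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom           # probability the offense passes
    q = (d - b) / denom           # probability the defense stops the pass
    value = q * a + (1 - q) * b   # offense's expected yards per play
    return p, q, value

print(solve_2x2_zero_sum([[2, 7], [6, 1]]))   # (0.5, 0.6, 4.0)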

GAME VARIATIONS
• What would change if the offense gained 17 yards from a successful
passing play rather than just seven? The new mixed-strategy Nash
equilibrium would be where the offense passes just 25% of the time and
the defense prepares to stop the pass 80% of the time.
• Interestingly, the number of yards you gain from a successful pass has
more than doubled, but the probability you pass has fallen from one-half
to one-quarter. How can that be?
• Remember, you do best when your opponent is indifferent between
stopping the pass and stopping the run. If he gives up twice as many
yards when you have a successful play, he’ll only be indifferent between
his two strategies if you rarely pass.
• And what if you gained 47 yards from a successful passing play? In that
case, the mixed-strategy Nash equilibrium is when the offense passes just
10% of the time and the defense prepares to stop the pass 92% of the
time. If your opponent expects you to pass any more often than that, he’s
going to prepare to stop the pass 100% of the time, in which case you’ll
never have one of those successful 47-yard passing plays.
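• You can check these new mixes with the same indifference algebra: with
a 17-yard pass, setting p × (−2) + (1 − p) × (−6) equal to p × (−17) +
(1 − p) × (−1) simplifies to 4p − 6 = −16p − 1, so 20p = 5 and p = 1/4.
The solver sketched in the previous section agrees, returning 0.25 and
0.80 for the payoff matrix [[2, 17], [6, 1]] and 0.10 and 0.92 for
[[2, 47], [6, 1]].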


Is this the way things work in real football? Surely, teams with the
best passing game do the most passing. As it turns out, they do not.
If you look at passer rating—a measure of quarterback effectiveness
based on things like pass completions, yards, and touchdowns—teams
with a higher passer rating tend to pass less, just as game theory
would predict.

READINGS
Leeson, “Oracles.”
Palacios-Huerta, “Professionals Play Minimax.”
Reiley, Urbancic, and Walker, “Stripped-Down Poker.”
Rousu, “Run or Pass?”
Walker and Wooders, “Minimax Play at Wimbledon.”

QUESTIONS
1. The game of chicken, introduced in lesson 3, has two pure-strategy
Nash equilibria, but it also has a mixed-strategy Nash equilibrium.
Find the mixed-strategy Nash equilibrium for the following game
of chicken.

                     Colin
               Straight    Swerve
Rose  Straight  −1, −1      2, 0
      Swerve     0, 2       1, 1


2. Imagine a simplified version of baseball where the pitcher simply
chooses between throwing a fastball or a changeup, while the batter
chooses to be ready for a fastball or ready for a changeup. Assuming
a higher payoff is preferred to a lower one, convince yourself that this
game has no pure-strategy Nash equilibrium, and then find its mixed-
strategy Nash equilibrium.

                               Batter
                      Ready for      Ready for
                      a fastball     a changeup
Pitcher  Throw a fastball   −1, 1        1, −1
         Throw a changeup    1, −1      −2, 2


LESSON 6

LET’S TAKE TURNS: SEQUENTIAL-MOVE GAMES


All of the games discussed up to this point have been
simultaneous-move games, meaning players had to
make their decisions simultaneously, without knowing
what the other player had done. This lesson focuses on
sequential-move games, where players take turns.

NIM
• There are countless variations of the game nim, but in this example,
imagine there’s a pile of 20 beads between you and another player. You
take turns removing between one and six beads from the pile with the
understanding that the player who takes the last bead wins.
• If you get to go first, how many beads should you take to guarantee
that you win? Suppose you take six beads, leaving 14 in the pile. Your
opponent then takes five beads, leaving nine. You go again and take
two. There are seven beads left at this point, and it’s clear that you are
sure to win.
• No matter how many beads your opponent takes in her next turn, there
will be six or fewer left in the pile, meaning you’ll be able to take them
all on your third turn and win the game.
• You can guarantee a win by leaving your opponent with a pile of 14 beads
after your first turn, which ensures that no matter how many she takes,
you can take just enough to leave her with a pile of seven on her next
turn. And you can be sure to leave your opponent with a pile of 14 beads
by taking six at the beginning.

STRATEGY
• Nim illustrates two important concepts. The first is backward induction,
which is when you start by thinking about the last round and then
work your way backward to the first round. This is absolutely central to
finding the solutions to the sequential-move games in this lesson.


• You know that in order for you to win, no more than six beads must
remain in the pile in the last round. With that in mind, you know
you need to leave your opponent with seven beads in the second-to-last
round. And you know that you can leave her with seven beads in the
second-to-last round by leaving her with 14 in the preceding round.
• This game also has a clear first-mover advantage, which means that going
first improves your payoff. In sequential-move games, going first will
often give you an advantage, but not always. In fact, not all versions of
nim have a first-mover advantage.
• Imagine the game begins with 21 beads in the pile instead of 20. If you
go first, no matter how many beads you take on your first turn, your
opponent will take just enough on her first turn to leave a pile of 14 beads.
And no matter how many you take on your second turn, she’ll take just
enough to leave seven, guaranteeing her the win. The game was only
changed slightly, but it was enough to give a second-mover advantage.
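
The winning rule generalizes neatly: with a maximum take of six, a pile is losing for the player to move exactly when it's a multiple of seven. A tiny Python sketch (illustrative only, not from the course) captures this.

def nim_move(pile, max_take=6):
    # Winning move: take the pile down to the nearest multiple of (max_take + 1).
    # Returns None when the pile is already such a multiple, meaning every move
    # loses against a perfect opponent.
    remainder = pile % (max_take + 1)
    return remainder if remainder != 0 else None

print(nim_move(20))   # 6: take six beads, leaving 14
print(nim_move(21))   # None: the second mover wins with perfect play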

SEQUENTIAL-MOVE GAMES IN POLITICS


• This example comes from Games of Strategy by Dixit, Skeath, and Reiley.
Think of the lawmaking process as a sequential-move game with two
players: Congress and the president. Congress has to decide which, if
either, of two provisions to include in a bill they’ll send to the president.
Assume Congress loves provision A and hates provision B, while the
president loves B and hates A.
• Congress has four potential courses of action to choose from. It can pass
a bill that includes only A, the provision it loves; a bill that includes only
B, the provision it hates; a compromise bill that includes both A and B;
or nothing.
• Congress moves first in this game, deciding whether to pass a bill and,
if so, whether that bill should include provision A, provision B, or both.
If Congress passes the bill, the president moves second, deciding whether
to sign the bill into law or veto it.


GAME TREE
• The sequential nature of this game can be captured using a game tree,
which has branches representing each of the player’s potential decisions.
• The initial decision node* represents Congress. It has four branches
depicting Congress’s four potential choices: pass A, pass B, pass
a combination of A and B, or do nothing.
• If Congress passes nothing, the game ends at a terminal node,** where
both Congress and the president earn a payoff of 2. If Congress actually
passes one of the bills, the president decides whether to sign it or veto it.
Whatever he decides, the game ends with a terminal node.

* Game theorists often draw this initial decision node as an open
circle so it’s easier to find.
** You can tell terminal nodes apart from other decision nodes by
looking for the payoffs associated with terminal nodes, put in
parentheses. The first mover’s payoff is written first, followed by
the second mover’s, and so on.

• For example, if Congress passes a bill including only provision A and
the president signs it, Congress earns a payoff of 4—the best possible
payoff from its standpoint—and the president earns a payoff of 1—the
worst possible payoff from his standpoint.
• If the president vetoes that bill instead, and Congress doesn’t have
the votes to override the veto, nothing will get signed into law. Both
Congress and the president earn a payoff of 2.

SOLUTION
• The solution to this game can be found by using backward induction.
Start at the final decisions—those that the president makes—and work
back to the initial decision.
• If Congress passes A, the president earns a payoff of 1 if he signs the
bill and 2 if he vetoes it. Because 2 is better than 1, the president vetoes
the bill.
• If Congress passes B, the president earns a payoff of 4 if he signs the
bill and 2 if he vetoes it. Because 4 is better than 2, the president signs
the bill.
• If Congress passes a combination of A and B, the president earns a payoff
of 3 if he signs the bill and 2 if he vetoes it. Because 3 is better than 2,
the president signs the bill.
• Now that you know what the president will do in each scenario, you
can return to the initial decision node. If Congress passes A and the
president vetoes it, Congress will have a payoff of 2. If it passes B and
the president signs it, Congress gets a payoff of just 1. If it passes A and
B and the president signs it, Congress earns a payoff of 3. And
if Congress does nothing, the game ends immediately, with Congress
earning a payoff of 2.
• Of these four possibilities, passing the compromise bill with both
provisions A and B offers Congress the highest payoff.
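
Backward induction here is mechanical enough to automate. The sketch below (hypothetical Python, using the payoffs described above) computes the president's best response at each decision node and then Congress's best opening move.

# (Congress's payoff, president's payoff) at each terminal node.
payoffs = {
    ("A",  "sign"): (4, 1), ("A",  "veto"): (2, 2),
    ("B",  "sign"): (1, 4), ("B",  "veto"): (2, 2),
    ("AB", "sign"): (3, 3), ("AB", "veto"): (2, 2),
}

def president_response(bill):
    # The president signs or vetoes, whichever maximizes his payoff (index 1).
    return max(("sign", "veto"), key=lambda action: payoffs[(bill, action)][1])

options = {"nothing": 2}   # Congress's payoff if it passes no bill
for bill in ("A", "B", "AB"):
    options[bill] = payoffs[(bill, president_response(bill))][0]

print(max(options, key=options.get), options)   # AB is Congress's best move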


EQUILIBRIUM PATH OF PLAY


• The game’s equilibrium is described in terms of strategies, not payoffs.
In sequential-move games like this one, a player’s strategy tells you
what that player would do at every decision node she might conceivably
find herself at. You should think of a strategy as a comprehensive set of
instructions detailing what a player would do in any possible state
of the world.
• In this game, Congress only has one decision node—the initial decision
node—so it has only four potential strategies: pass A, pass B, pass A and
B, or do nothing.
• But the president’s strategy doesn’t just tell us what he’ll do when
Congress passes the bill including both A and B. It also tells us what
he’ll do if Congress passes only A and if it passes only B.
• Since this game is solved using backward induction, you can’t know what
Congress will do at the initial decision node until you determine what
the president will do at each of his three decision nodes.
• If the president were playing a different strategy—for example,
signing every bill that comes across his desk—Congress would behave
differently. In this case, it would pass a bill containing only A, its
favorite provision.
• With all of this in mind, the equilibrium strategies for this game are for
Congress to pass the combination of A and B and the president to veto
A, sign B, and sign the combination of A and B.

THE ULTIMATUM GAME


• The next game is taken from behavioral economics, which is often
described as the intersection between economics and psychology.
This game, called the ultimatum game, is a classic in the field.


• Two anonymous strangers sit in separate laboratories. One is assigned
the role of allocator, and the other is assigned the role of decider.
The allocator moves first, deciding how to divide $10 between herself
and the decider. The decider moves second, deciding whether to accept
the allocator’s offer. If he accepts the offer, both players receive the money
as proposed. If he rejects the offer, both go home empty-handed.
• In this version of the game, the allocator has just three choices:
◦ An even split, where the allocator keeps $5 for herself and offers $5
to the decider
◦ An uneven split, where the allocator keeps $7.50 for herself and offers
just $2.50 to the decider
◦ A grossly uneven split, where the allocator keeps $9.50 for herself and
offers the decider a paltry $0.50
• Any time the decider rejects the offer, both players receive a payoff of 0.
Any time the decider accepts the offer, each player receives the payoffs
the allocator proposed.


SOLUTION
• Once again, you can use backward induction to find the equilibrium.
• If the allocator offers an even split, the decider earns a payoff of $5 if he
accepts the offer and nothing if he rejects it, so he should accept.
• If the allocator offers an uneven split, the decider earns a payoff of
$2.50 if he accepts the offer and nothing if he rejects it. Again, he
should accept.
• If the allocator offers a grossly uneven split, the decider earns a payoff of
$0.50 if he accepts the offer and nothing if he rejects it. Fifty cents isn’t
much, but it’s better than nothing, so he should still accept.
• Now, knowing that the decider will accept any offer she makes, the
allocator should choose to make a grossly uneven offer to maximize her
own payoff.
• This means the equilibrium outcome is the one where the allocator
earns a payoff of $9.50 and the decider earns just $0.50. The allocator’s
equilibrium strategy is to choose the grossly uneven split, and the
decider’s equilibrium strategy is to always accept.
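
The same two-step logic fits in a few lines of Python (an illustrative sketch, not the course's own code): a purely self-interested decider accepts any positive amount, so the allocator keeps as much as she can.

# (allocator's share, decider's share) for each possible offer.
offers = {"even": (5.00, 5.00), "uneven": (7.50, 2.50), "grossly uneven": (9.50, 0.50)}

def decider_accepts(amount):
    return amount > 0   # anything beats going home with $0

# The allocator anticipates acceptance and maximizes her own share.
best = max(offers, key=lambda k: offers[k][0] if decider_accepts(offers[k][1]) else 0)
print(best)   # grossly uneven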

REAL-LIFE APPLICATION
• Do participants in a laboratory actually behave the way game
theory predicts?
• In the classic version of the game, the most common division proposed
by allocators is an even split, and most deciders reject an offer
of $2.50. They’d rather walk away with nothing than accept just
one-quarter of the pie.
• This could be because the payoffs don’t capture everything the players
care about. In some cases, that may simply be money. But it’s possible
the allocator also cares about fairness and the decider cares about not


feeling like he’s been stiffed. This changes the payoffs so that the decider
is less likely to accept an unfair offer—at least, as long as the stakes are
low enough.***

READINGS
Andersen, Ertaç, Gneezy, Hoffman, and List, “Stakes Matter in
Ultimatum Games.”
Kahneman, Knetsch, and Thaler, “Fairness and the Assumptions of
Economics.”
Leeson, “Trading with Bandits.”
Thaler, “The Ultimatum Game.”

QUESTIONS
1. Consider a variation on the veto example where the president has line-
item veto powers, meaning he can sign into law only the provisions
of the bill he likes while vetoing others. In this version of the game,
Congress views both A and B becoming law as worse than passing
nothing. Use this new payoff ranking to find the equilibrium for this
game, first without the line-item veto and then with it.
Outcome                     Congress    President
A becomes law                   4           1
B becomes law                   1           4
Both A and B become law         2           3
Congress passes nothing         3           2

*** In a 2011 paper titled “Stakes Matter in Ultimatum Games,” the
authors conducted the game in poor villages in northeast India.
The amount allocators were asked to divide varied, but it was
sometimes more than what a typical worker in the area earned
in nine months. When the stakes were this high, the average
allocator offered the decider just 12%, and roughly 95% of deciders
accepted this uneven offer.

Game tree without line-item veto

Game tree with line-item veto


LESSON 7

WHEN BACKWARD INDUCTION WORKS—AND DOESN’T


In some games, the solution found using backward
induction seems so startlingly counterintuitive that you’ll
have to question whether any rational person would really
play the way backward induction predicts he would. As a test,
this lesson explores whether participants in a laboratory play
the way game theory predicts or behave in a way that’s more
consistent with common sense.

CENTIPEDE
• The first example is a classic called centipede. In this version of the game,
there are six rounds. In each round, one of two players decides whether
to stop or to let the game continue.
• Let’s say you go first. If you choose to stop the game after the first round,
you get $0.40 and your opponent gets $0.10. If you choose to let the
game continue, your opponent decides in the second round. If he chooses
to stop the game, he gets $0.80 and you get $0.20. If you choose to let
the game continue, you decide in the third round, and so on.
• Looking at the payoffs, the important thing to notice is that as you move
from one round to the next, your combined payoff keeps doubling. But
the way the payoffs are divided keeps alternating between an 80/20 split
favoring you and a 20/80 split favoring your opponent.


SOLUTION
• This game can be solved using backward induction, starting with the
last decision and working back to the first.
• If your opponent, as the second mover, finds himself at the game’s last
decision node, he can stop the game and earn a $12.80 payoff or let
the game continue and earn just $6.40. Assuming the only thing he
cares about is the money he’ll receive, his choice is easy: He should stop
the game.
• Next, take a step backward and think about what you should do in the
fifth round. You can stop the game and earn a payoff of $6.40, or you can
let the game continue knowing that your opponent will stop the game
in the next round, in which case you’ll earn $3.20. Assuming the only
thing you care about is the amount of money you receive, your choice is
easy: You should stop the game.
• In the fourth round, your opponent can stop the game and earn $3.20,
or he can let the game continue knowing you’ll stop it in the next round,
in which case he’ll earn $1.60. He should stop the game.
• By similar reasoning, you should stop the game in the third round, your
opponent should stop the game in the second round, and you should
stop the game in the first round. This means the equilibrium outcome
is the one where you earn just $0.40 and your opponent earns $0.10.
• Remember that in sequential games like this one, a strategy is a detailed
set of instructions describing what you’d do at every decision-making
point you might find yourself at. Your equilibrium strategy is to always
play stop, and your opponent’s equilibrium strategy is to always play stop.
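
Here is the backward-induction pass written out as a minimal Python sketch (illustrative; the payoff pairs are the ones described above, written as (you, opponent), with you moving in the odd-numbered rounds).

# Payoffs if the game is stopped in rounds 1 through 6, plus the payoffs
# if the last mover lets the game continue to the end.
stop_payoffs = [(0.40, 0.10), (0.20, 0.80), (1.60, 0.40),
                (0.80, 3.20), (6.40, 1.60), (3.20, 12.80)]
continuation = (25.60, 6.40)

for round_number in range(6, 0, -1):
    mover = 0 if round_number % 2 == 1 else 1   # you move in odd rounds
    stop = stop_payoffs[round_number - 1]
    # A self-interested mover stops whenever stopping beats continuing.
    if stop[mover] >= continuation[mover]:
        continuation = stop

print(continuation)   # (0.4, 0.1): both players stop at their first chance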

CRITICIZING BACKWARD INDUCTION


• Centipede was first proposed by Robert Rosenthal as a thought
experiment meant to criticize backward induction. His version of the
game had 100 rounds rather than six, so the game tree looked like
a true centipede.


• If you were to make it to the last round of the game, you’d find yourself
dividing up billions of dollars. But, using backward induction, your
equilibrium strategy would be the same—always play stop. And once
again, the equilibrium outcome would be the one where you earn $0.40
and your opponent earns $0.10.
• Would rational players really play this way? Game theorists debated this
puzzle for more than a decade before Richard McKelvey and Thomas
Palfrey decided to have real people play the six-round version of the game
with real money at stake.
• The authors found that the game never ended in the first round, the way
backward induction would predict, but it also virtually never went all
the way to the end. The most common outcome was for participants to
stop the game in the fourth or fifth round.
• McKelvey and Palfrey proposed that some players care not only about
their own payoff but also the payoff of the other player.* And if you’re
interested in maximizing both payoffs, you want to see the game continue
all the way to the end.

ALTRUISM IN CENTIPEDE
• Suppose you’re playing the game again, and this time you’re purely
self-interested. If you think there’s a 5% chance your opponent is an
altruist, it might actually be rational for you to choose to continue the
game. You might be able to play all the way to the end, in which case
you earn a payoff of $25.60. Your opponent might stop the game in the
second round, in which case you’d earn just $0.20, but a 5% chance of
getting $25.60 more than offsets the risk of losing $0.20.
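• To put numbers on that intuition: if a 5% chance your opponent is an
altruist translates into a 5% chance of reaching the end, and a selfish
opponent stops in the second round, your expected payoff from continuing
is roughly 0.05 × $25.60 + 0.95 × $0.20 = $1.47, comfortably above the
$0.40 you lock in by stopping now. (This is a back-of-the-envelope check,
not the full calculation.)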

* The authors originally called players who behaved this way
irrationals, but early readers of the paper said that caring about
others doesn’t mean a person is irrational; it just means he’s
altruistic.

• Now, assume that your opponent is also purely self-interested. He sees
that you chose to continue in the first round, but can he be absolutely
sure you’re an altruist? You might be a selfish player who’s bluffing. But
he might choose to continue anyway because if you’re really an altruist,
the game could continue all the way to the sixth round, at which point
he’d stop it and earn a payoff of $12.80. True, you might stop the game
in the third round, in which case he’d earn just $0.40 instead of $0.80.
But that’s a risk he’s willing to take.
• Though your opponent chooses to continue in the second round, there’s
still a chance he’s a selfish player who’s merely pretending to be an
altruist. And you might be somewhat more hesitant about continuing
in the third round because this time the upside is smaller—increasing
your payoff from $1.60 if you stop to $25.60 if you continue to the end.
And the downside is bigger: You’d now earn $0.80 less if your opponent
stops the game in the next round.
• Suppose you continue despite these concerns. Your opponent still can’t
be sure you’re an altruist. If you were, he could continue in this round,
let the game reach the sixth round, and stop it there for a payoff of
$12.80. But if you instead stop the game in the next round, he’d be left
with just $1.60 rather than the $3.20 he can collect by stopping now.
Stopping the game at this point was the most common outcome in
McKelvey and Palfrey’s experiment.

COMMON KNOWLEDGE
• From this example, you can see that it’s not enough that players be
rational and self-interested. In order to end up at the outcome backward
induction predicts—the one where you stop the game in the first round—
both players’ rationality and self-interest need to be common knowledge.
• If your rationality and self-interest aren’t common knowledge—if, for
example, each of you thinks there’s some small chance the other is an
altruist—the game shouldn’t end in the first round.


• But maybe not everybody is a rational game theorist who can instantly
solve this kind of game using backward induction. Maybe most
people look at this game and see the biggest payoffs are at the end,
so they realize the only way to get to the end is to continue in the
early rounds.
• It would be nice, then, if we could figure out whether people let the
game continue for several rounds due to a lack of common knowledge
of rationality and self-interest or due to a lack of the cognitive firepower
needed to solve the game.

CENTIPEDE WITH CHESS GRANDMASTERS


• This is what Ignacio Palacios-Huerta and Oscar Volij tried to do using
chess grandmasters, holders of the highest title awarded to chess players. Mastering
chess requires backward-induction reasoning that’s much more
sophisticated than what’s required for centipede. A person who’s reached
the highest level of international chess play should, therefore, have no
problem using backward induction to solve the centipede game—and,
more importantly, this is common knowledge.
• When chess grandmasters play centipede against other chess players,
they stop the game in the first round every time, just as theory predicts.
But the authors of the study went on to compare the behavior of the
chess player with that of the college undergraduate—the workhorse of
experimental economics research.
• When a student was paired with a student, the game ended in the first
round just 3% of the time. When a student was paired with a chess player
and the student moved first, she was 10 times more likely to stop the
game in the first round. That is, when a student knew she’d been paired
with a highly rational player, she was much more likely to demonstrate
backward-induction reasoning.
• This suggests many of the students in this study understood backward
induction, but they just couldn’t always be sure the person they were
paired with understood it. When that uncertainty is eliminated—when
rationality is common knowledge—students behaved much more in
keeping with the prediction of backward induction.

• With this same pairing, if the chess player moved first, he was only half
as likely to stop the game in the first round as compared to when he
played against another chess player. The chess player still understands
backward induction, but he can’t be sure the student he’s paired with
understands it.
• This is a curious result. Imagine you’re the student. The game is more
likely to continue—meaning you make more money—if the chess player
assumes you don’t understand backward induction. It doesn’t happen
often, but there are situations where you want to be underestimated.

READINGS
McKelvey and Palfrey, “An Experimental Study of the
Centipede Game.”
Palacios-Huerta and Volij, “Field Centipedes.”
Selten, “The Chain Store Paradox.”
Suri, “The Nukes of October.”

QUESTIONS
1. Assuming you’re an altruist and your opponent cares only about his
own monetary payoff, use backward induction to find the equilibrium
to the following version of centipede.


2. In the classic version of the chain store game, a rational incumbent firm
has an incentive to respond passively in every city where a competitor
enters. Imagine the following variation on the game where the
incumbent derives so much satisfaction from acting tough that its
payoff from responding aggressively is greater than its payoff from
responding passively. Use backward induction to find the equilibrium
for this single-city version of the game.


LESSON 8

ASYMMETRIC INFORMATION IN POKER AND LIFE


This lesson focuses on games where one person knows
something that others don’t. In some cases, like poker, this
can benefit the person who has the private information.
If you’re bluffing, for example, you don’t want your opponents
to know. But in other cases, like when you’re selling a nice used
car, you might wish others knew the private information—such
as just how good the car is.

STRIPPED-DOWN POKER
• The simple version of poker in this example was developed by David
Reiley, Michael Urbancic, and Mark Walker. It has just two players
and a deck of eight cards—four kings and four queens. You and your
opponent each put $1 into the pot. Your opponent is then dealt a card,
and she decides whether to fold or bet.
• If she folds, the game ends, and you collect the $2 pot, making you $1
richer. If she bets, she adds $1 to the pot, and you have to decide whether
to fold or call.
• If you fold, she collects the $3 pot, making her $1 richer. If you call, you
put $1 in the pot, and she has to show you her card.
• If it’s a king, she wins and collects the $4 pot, making her $2 richer. If
it’s a queen, you win and collect the $4 pot, making you $2 richer.
• Imagine a game where your opponent is dealt a card and decides to bet.
You now have to decide whether to fold or call. Your opponent clearly
knows something you don’t—she knows if she’s holding the winning
card. She could have bet because she knows she has the winning hand,
or she could be bluffing, hoping you’ll fold.
• If you choose to fold, the game ends and she collects the $3 pot, making
her $1 richer. But if you call, you put in another dollar, and she then
shows you her card. It’s a king, so she wins and collects the $4 pot,
making her $2 richer.


• The deck isn’t stacked in your opponent’s favor, but the game still isn’t
fair, because it’s a game of asymmetric information. Your opponent
knows something you don’t know, which gives her an edge.

GAME TREE
• To see why your opponent has an edge, you can draw a game tree.
• The first mover in this game isn’t your opponent; rather, it’s nature.
When game theorists talk about nature making a move, they mean the
game has an element of chance. In this game, that means nature decides
whether your opponent is dealt a king or a queen.
• Once nature moves, your opponent sees the card and decides whether
to fold or bet. No matter which card she’s dealt, if she folds, the game
ends with her earning a payoff of −1 and you earning a payoff of 1. And
if she bets, you need to decide whether to fold or call.
• If you fold, the game ends with her earning a payoff of 1 and you earning
a payoff of −1. And that’s true no matter which card she has in her hand.


• The game is most interesting when your opponent bets and you call.
Only in that case does she show you her card. If she has the king, the
game ends with her earning a payoff of 2 and you earning a payoff of
−2. If she has the queen, the game ends with her earning a payoff of −2
and you earning a payoff of 2.
• At this point, it would be tempting to use backward induction to find
the subgame perfect equilibrium, but there are two things that make
this impossible.
◦ Nature moves first and at random. Because nature doesn’t care about
payoffs or strategies, you can’t use backward induction to decide what
nature should do in the first move.
◦ You don’t know which decision node you’re at when you have to
decide whether to fold or call. Your opponent knows which card she
has, but you don’t.
• This information asymmetry changes the way the game tree is drawn.
In particular, to show that you don’t know which of the two decision
nodes you’re at, both nodes are drawn inside one information set.*
• Since you have just the one information set, you can’t use backward
induction to say what you’d do if your opponent has a king and what
you’d do if she has a queen—only she knows that.

PAYOFFS
• To solve this game, you need to convert the game tree to a payoff
matrix. But first, you need to think about how many strategies each
player has.
• Because you only have one decision-making point, and you only have
two possible choices at that point, you only have two potential strategies:
fold or call.

* An information set is a group of decision nodes that are
indistinguishable to the player making the decision.

• Things are more complicated for your opponent. She has two decision
nodes: the one where nature deals her a queen and the one where nature
deals her a king. At each of those decision nodes, she has two possible
choices: fold or bet. This means she has four potential strategies:
◦ Always bet
◦ Bet with a king and fold with a queen
◦ Fold with a king and bet with a queen
◦ Always fold
• This means you’ll have a 2×4 payoff matrix representing each of your
payoffs given your two potential strategies and your opponent’s four
potential strategies.
• The easiest cells to fill out are those where your opponent always folds.
It doesn’t matter whether you’d planned to fold or call, because you
never get to the point of making a decision. The game ends with your
opponent $1 poorer and you $1 richer.
• What if she always bets? If you fold, it doesn’t matter which card she was
dealt. Either way, the game ends with her $1 richer and you $1 poorer.
• If you call, you have to think about your expected payoffs. Half the
time, nature will have dealt your opponent a king. She’ll collect the
pot, meaning she’s $2 richer and you’re $2 poorer. The other half the
time, she’s dealt a queen. You collect the pot, meaning she’s $2 poorer
and you’re $2 richer.
• Your opponent’s expected payoff is the probability she’s dealt a king
times her payoff if she’s dealt a king, plus the probability that she’s dealt
a queen times her payoff if she’s dealt a queen:
1/2 × $2 + 1/2 × (−$2) = 0.
• That’s an expected payoff of 0. Your expected payoff would be the mirror
image—also 0. You can use similar expected-payoff calculations to fill
in the remaining four cells.


BEST-RESPONSE ANALYSIS
• If you think your opponent is always going to bet, what’s your best
response? If you fold, you’re sure to earn a payoff of −1. If you call, you
earn an expected payoff of 0. Zero is greater than −1, so you should call.
• What if you think she’s going to bet with a king and fold with a queen?
You earn an expected payoff of −1/2 if you call and 0 if you fold, so you
should fold. If she’s only betting when she has the winning hand, it’s
not smart for you to call.
• What if she folds with a king and bets with a queen? This would be
a strange strategy: When she bets, she’s always bluffing. Clearly, you do
much better if you call.
• Finally, if she always folds, you’re indifferent between your two strategies
because no matter what you planned to do, the game ends with you
collecting the pot before you actually have to make a decision.


• Now, turn to your opponent’s perspective. If she thinks you’re going to
call, her best response is to play honestly, betting when she has a king and
folding when she has a queen. If, on the other hand, she thinks you’re
going to fold, she should always bet regardless of the card she’s dealt.

SOLUTION
• At this point, it’s clear that there is no pure-strategy Nash equilibrium.
But this kind of game has to have at least one Nash equilibrium, so this
one must involve mixed strategies.
• Two of your opponent’s four strategies are dominated, meaning they
offer a worse payoff than another strategy. For example, no matter what
she thinks you’re going to do, folding with a king and betting with
a queen earns her a lower payoff than always betting. Similarly, no matter
what she thinks you’re going to do, always folding earns her a lower
payoff than always betting. We can confidently assume your opponent
will never play a dominated strategy.
• Using the same method you used in lesson 5, you can quickly show that
this game’s mixed-strategy Nash equilibrium is where your opponent
plays always bet one-third of the time and plays bet with a king, fold
with a queen the remaining two-thirds of the time, while you call
two-thirds of the time and fold the remaining one-third of the time.
• Calculating both players’ expected payoffs at the equilibrium will show
just how unfair this game is to you. Just like in lesson 5, your opponent
chooses the probabilities with which she plays her strategies to leave you
indifferent between yours. That means that your expected payoff if you
fold will be the same as if you call.
• Suppose you call. One-third of the time, your opponent plays always
bet, in which case your expected payoff is 0. The other two-thirds of
the time, she plays bet with a king, fold with a queen, in which case
your expected payoff is −1/2. That means your overall expected payoff
is −1/3:
1/3 × 0 + 2/3 × (−1/2) = −1/3.


• Your expected payoff if you fold is also −1/3. Either way, you expect to
lose an average of $0.33 each time you play a hand of this game because
your opponent has a critical piece of information that you don’t know.
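
You can verify both the matrix entries and the equilibrium payoff by enumerating the deal, as in this illustrative Python sketch (not the course's own code).

# Your expected payoff in stripped-down poker for each pairing of strategies.
# Her strategy maps the card she's dealt to "bet" or "fold"; each card is
# dealt with probability 1/2.
def your_payoff(you, her_strategy):
    total = 0.0
    for card in ("king", "queen"):
        if her_strategy[card] == "fold":
            outcome = 1                       # she forfeits the pot: you net +$1
        elif you == "fold":
            outcome = -1                      # you forfeit the pot: you net -$1
        else:                                 # you call, and she shows her card
            outcome = -2 if card == "king" else 2
        total += 0.5 * outcome
    return total

always_bet = {"king": "bet", "queen": "bet"}
honest     = {"king": "bet", "queen": "fold"}

# Her equilibrium mix: always bet 1/3 of the time, play honestly 2/3 of the time.
for you in ("call", "fold"):
    ev = (1/3) * your_payoff(you, always_bet) + (2/3) * your_payoff(you, honest)
    print(you, round(ev, 3))   # both print -0.333: you're indifferent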

LEMONS AND PLUMS


• In another example of asymmetric information, imagine a world with
just two types of used cars: lemons and plums. The plums might look
a little tired, but they’re reliable and will always get you where you
need to go. Lemons look just like plums, but they’re less reliable and will
occasionally break down.
• Let’s say another player owns a used car he’s interested in selling, and
you’re interested in buying a used car if the price is right. If you both know
whether the car is a lemon or a plum, you shouldn’t have too much trouble
finding a price you can agree on. If it’s a lemon, anything between $100
and $200 could work, and if it’s a plum, anything between $1,000 and
$1,100 could work.
• But what if he knows if it’s a lemon or a plum, and you don’t? There’s
not much to be gained from asking him what kind of car it is, because
he has an incentive to tell you it’s a plum no matter what.
• You could offer him $1,050—a fair price for a plum. If it is a plum, he’d
be willing to sell it to you at that price. But if it’s a lemon, he’d be thrilled
to sell it at that price: It’s more than 10 times the minimum he’d be willing to accept.


• Is it wise to offer $1,050 if there’s a 50% chance the car is a plum and
50% chance it’s a lemon? In that case, there’s a 50% chance you’ll pay
a fair price for a plum, earning what can be called $50 in profit. But
there’s a 50% chance you’ll radically overpay for a lemon, earning you
a profit of −$850.
• What if you based your offer on the car’s expected value? If there’s a 50%
chance it’s a lemon and 50% chance it’s a plum, its expected value is 1/2
× $200 + 1/2 × $1,100, or $650.
• Would offering something a little less than that—say, $600—earn you
a modest profit? Think about how the seller would respond to a $600
offer if the car is a plum. In that case, he’s not willing to sell for less than
$1,000, so he’ll reject the offer. But if it’s a lemon, he’ll be delighted to
sell it for $600, and you’ll still be overpaying for a lemon.
• This leaves offering a lemon price, like $150. At that price, you know
you won’t be able to buy a plum, but you also know you won’t overpay
for a lemon. Asymmetric information destroys the market for plums,
leaving only a market for lemons.
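
The unraveling is easy to simulate. In this sketch (hypothetical Python using the reservation prices from this example), a seller accepts any offer at or above his minimum, so every offer high enough to attract a plum loses money on the lemons it also attracts.

# Adverse selection with the lemon/plum numbers from this example.
seller_min = {"lemon": 100, "plum": 1000}   # lowest price each seller accepts
buyer_max  = {"lemon": 200, "plum": 1100}   # most the buyer is willing to pay

def expected_profit(offer, p_lemon=0.5):
    total = 0.0
    for car, prob in (("lemon", p_lemon), ("plum", 1 - p_lemon)):
        if offer >= seller_min[car]:          # the seller accepts or walks away
            total += prob * (buyer_max[car] - offer)
    return total

for offer in (1050, 650, 600, 150):
    print(offer, expected_profit(offer))      # only the lemon price is profitable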

SIGNALING
• This phenomenon was originally described by George Akerlof in his
paper “The Market for ‘Lemons.’” This work won Akerlof the Nobel
Prize in 2001, which he shared with Michael Spence, who showed that
one way to overcome asymmetric information is through signaling—in
this example, introducing a third party to inspect the used car.
• Rather than asking if the car is a lemon or a plum, you ask the seller if he’d
be willing to pay $20 to have the car inspected. If the car is a plum, he’d be
happy to pay for the inspection because he knows the car will pass. This will
allow him to sell it to you for a plum price, leaving both of you better off.
• If the car is a lemon, the seller won’t agree to pay for the inspection. It’s
better for him to save $20 and let you assume the worst—which happens
to be true. You can still find a price you both agree on, but it will be a much
lower price than he’d get if he could credibly signal that his car is a plum.


READINGS
Akerlof, “The Market for ‘Lemons.’”
Caplan, “Bryan Caplan—The Case Against Education.”
Reiley, Urbancic, and Walker, “Stripped-Down Poker.”

QUESTIONS
1. True or false: You can use backward induction to solve the following
version of stripped-down poker.

2. True or false: Given the following table, if a potential buyer thinks
there’s a 50% chance a car is a lemon and a 50% chance it’s a plum,
she should be willing to pay up to $650 for it, since that’s the average
of $200 and $1,100.
                                             Lemon (bad)    Plum (good)
Lowest price a seller is willing to accept       $100          $1,000
Highest price a buyer is willing to pay          $200          $1,100


LESSON 9

DIVIDE AND CONQUER: SEPARATING EQUILIBRIUM


The previous lesson discussed games of asymmetric
information, where one player has some critical piece of
information that another player doesn’t. This lesson gives
two examples where the right set of incentives will motivate
the informed player to honestly reveal her private information.

GAME THEORY IN THE TRAVEL INDUSTRY


• Imagine you work for American Airlines. Most of the airline’s planes
have first-class and coach cabins. Passengers in both cabins reach their
destination at the same time, but first-class passengers enjoy more leg
room, more comfortable seats, complimentary drinks, and doting service
from the flight attendants.
• For this example, you can assume there are just two types of potential
travelers: business travelers and leisure travelers. The biggest difference
between them is that business travelers aren’t paying for their own
tickets, making them much less price sensitive.
• Assume a business traveler is willing to pay $2,000 for a seat in first class
and $1,000 × α for a seat in coach, where α is a coefficient that ranges
from 0 to 1 and indicates how comfortable things are in the coach cabin.
When α = 1, flying coach is quite pleasant. But when α = 0, flying coach
is an excruciating ordeal.
• A little more formally, you can write this as

WTP_F^B = $2,000 and WTP_C^B = $1,000 × α.
• The B superscripts represent the business traveler, and the F and C
subscripts represent first class and coach.
• A business traveler’s payoff from flying in either cabin is the difference
between what she’s willing to pay and the price she actually pays. You
can see that here, where π stands for payoff and P is the ticket price:

π_F^B = WTP_F^B − P_F and π_C^B = WTP_C^B − P_C.
• Now, assume a leisure traveler—who, remember, is spending his own
money—is willing to pay $600 for a seat in first class and $500 × α for
a seat in coach. More formally, you can write his willingness to pay for
flying in the two cabins as

WTP_F^L = $600 and WTP_C^L = $500 × α.
• The L superscripts represent the leisure traveler, and the F and C
subscripts again represent first class and coach.
• As with the business traveler, the leisure traveler’s payoff from flying in
either cabin is the difference between what he’s willing to pay and what
he actually pays.

SEPARATING EQUILIBRIUM
• As an executive at American Airlines, you’re all too aware that the airline
industry is fiercely competitive, meaning you have no control over ticket
prices. Suppose the going price for a first-class ticket from New York
to Los Angeles is $1,600, and the price for a coach ticket on that same
flight is $300.
• If you could fill your plane with first-class passengers paying $1,600
each, you’d do it. Unfortunately, there aren’t enough people willing to
pay that much. That means you’re going to have to fill most of your seats
with passengers paying the much lower fare.
• While you don’t control ticket prices, one thing you do control is α. You
can lower α by installing less comfortable seats in coach, moving those
seats closer together, providing less-appetizing snacks, and limiting in-
flight entertainment options.


• What makes this game harder for you is that it’s a game of asymmetric
information. Travelers know their type—and, therefore, their willingness
to pay—but you do not. Your job, then, is to choose a value for α that
creates a separating equilibrium where business travelers truthfully
reveal their high willingness to pay by buying first-class tickets and
leisure travelers truthfully reveal their low willingness to pay by buying
coach tickets.
• This won’t be easy. Your choice of α will have to satisfy incentive
compatibility constraints and participation constraints for both types
of travelers. Incentive compatibility constraints are conditions that give
the informed player an incentive to truthfully reveal their type, while
participation constraints ensure that informed players’ payoffs from
playing the game are at least as high as the payoffs they’d receive if they
didn’t play.

INCENTIVE COMPATIBILITY CONSTRAINTS


• First, you have to find the values of α that satisfy the business traveler’s
incentive compatibility constraints. You want the business traveler to
reveal her high willingness to pay by choosing the more expensive first-
class ticket.
• To find out for what values of α a business traveler will prefer to fly first
class, use your payoff functions to set up an inequality such that the
business traveler earns a higher payoff from flying first class than from
flying coach:

π_F^B ≥ π_C^B.
• Remember, a business traveler’s payoff is just her willingness to pay
minus the price she has to pay:

WTP_F^B − P_F ≥ WTP_C^B − P_C.

• Substituting in what you’ve assumed about willingness to pay and prices,
that gives you

$2,000 − $1,600 ≥ $1,000 × α − $300.
• Rearranging to solve for α, you get

α ≤ 0.7.
• This means that coach can’t be too nice, or business travelers will choose
to fly coach. If a seat in coach is nearly as nice as one in first class but
dramatically less expensive, even somebody on an expense account will
opt for coach.
• Next, you’ll do something similar for the leisure traveler. What has to
be true of α for him to truthfully reveal his low willingness to pay by
choosing the less expensive coach ticket?
• Again, use his payoff functions to set up an inequality such that he
prefers the less expensive coach ticket:

π_C^L ≥ π_F^L.
• As with the business traveler, the leisure traveler’s payoff is just his
willingness to pay minus the price he has to pay:

WTP_C^L − P_C ≥ WTP_F^L − P_F.
• Making a few substitutions gives you

$500 × α − $300 ≥ $600 − $1,600.
• Rearranging, you get

α ≥ −1.4.
• This might be surprising, since we said that α has to fall between 0 and
1. But what this result means is that given the prices of the two tickets,
there’s no realistic value of α that will motivate a leisure traveler to pay
for a first-class ticket.


PARTICIPATION CONSTRAINTS
• What’s more important from the leisure traveler’s perspective is the
participation constraint. If you make flying coach too miserable, the
leisure traveler will choose to not play the game—he’ll take the train,
or drive, or just stay home.
• Set up the leisure traveler’s participation constraint by comparing his
payoff from flying coach with his payoff if he chooses not to fly, which
we’ll say is 0:

π_C^L ≥ 0.
• Again, his payoff is his willingness to pay minus the price he pays:

$500 × α − $300 ≥ 0.
• Rearranging this inequality, you get

α ≥ 0.6.
• This means that α must be greater than 0.6, or the leisure traveler will
stay home.
• The last step is to find the business traveler’s participation constraint.
What has to be true so the business traveler will prefer flying first class
to staying home?
• Her payoff from flying first class needs to be greater than 0. Her payoff
is her willingness to pay minus the price she pays, which is $2,000 −
$1,600, or $400. Since $400 is greater than 0, this constraint is always
satisfied in this example, so the business traveler will be willing to play
the game.

SOLUTION
• Now that you’ve done this work, you can answer the question of just how
bad flying coach has to be. In other words, for what values of α do you
get a separating equilibrium where business travelers reveal their high
willingness to pay by buying first-class tickets and leisure travelers reveal
their low willingness to pay by buying coach tickets?


• In this example, α has to fall between 0.6—the minimum comfort level
to induce the leisure traveler to fly—and 0.7, which is the maximum
comfort level that keeps the business traveler from flying coach.
• In other words, α has to be carefully calibrated. You need the seats in
coach to be uncomfortable, but not too uncomfortable. You need the
snacks to be unsatisfying, but not too unsatisfying.
• If you get things just right, you’ll be able to overcome asymmetric
information. And if presented with the right incentives, passengers will
reveal their willingness to pay not by what they say but by what they do.
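If you'd like to see the moving parts in one place, here is a minimal sketch in Python of the constraint check. The business traveler's willingness to pay and the first-class fare come from the lesson; the coach fare and the leisure traveler's willingness to pay are hypothetical stand-ins, and a coach seat with comfort level α is treated as worth α times a traveler's first-class willingness to pay. The printed range therefore differs slightly from the 0.6-to-0.7 range above; the structure of the four constraints is the point.

v_business, v_leisure = 2_000, 1_500   # WTP for first class (v_leisure is hypothetical)
p_first, p_coach = 1_600, 900          # fares (p_coach is hypothetical)

def separates(alpha):
    """Check all four separating-equilibrium constraints at coach comfort alpha."""
    ic_business = v_business - p_first >= alpha * v_business - p_coach   # flies first
    ic_leisure = alpha * v_leisure - p_coach >= v_leisure - p_first      # flies coach
    pc_business = v_business - p_first >= 0                              # flies at all
    pc_leisure = alpha * v_leisure - p_coach >= 0                        # flies at all
    return ic_business and ic_leisure and pc_business and pc_leisure

feasible = [a / 100 for a in range(101) if separates(a / 100)]
print(f"separating equilibrium for alpha between {min(feasible)} and {max(feasible)}")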

GAME THEORY IN DUELS


• Another example of a game of asymmetric information comes from
dueling. This example is based on a paper by Christopher Kingston
and Robert Wright titled “The Deadliest of Games: The Institution of
Dueling.” They argue that dueling wasn’t as much about preserving your
honor as it was about preserving your creditworthiness, which allowed
you to borrow from and lend to other honorable members of society.
• In their paper, Kingston and Wright develop a mathematical model with
an infinite number of players who play an infinitely repeated game. In
each round, two players are randomly matched and assigned the role of
either borrower or lender.
• One player can lend the other a fixed amount of money to finance a risky
project. It’s risky because the project fails with some small probability,
leaving both players with nothing. The project succeeds the rest of the
time, generating profits the players can share.
• This is a game of asymmetric information because while the borrower
knows whether the project failed, all the lender knows is whether he’s
been paid back as promised. If the borrower doesn’t pay him back, it
could be because the project failed, or it could be that he simply kept all
of the borrower’s money. This information makes it harder to sustain
cooperation unless there’s a way to compel the borrower to pay the lender
anytime a project is successful.

• The way to compel a borrower in this game is through a duel. A lender who hasn't been repaid has the option of challenging the borrower to a
duel, and indeed, a lender in this situation loses his honor if he doesn’t
challenge the borrower to a duel. On the other hand, a borrower who
doesn’t accept this challenge loses his honor. A dishonored player is seen
as uncreditworthy, so he cannot borrow. And he can no longer issue
challenges, so he shouldn’t lend.
• Given these assumptions, the authors find the following equilibrium:
ʶ Honorable lenders will only lend to honorable borrowers and will
challenge anyone who doesn’t repay them to a duel.
ʶ Honorable borrowers will repay loans when they can and will always
accept challenges from honorable lenders.
ʶ Dishonorable players will never make or repay loans and will never
issue or accept challenges.

PAYOFFS
• In order to understand why this is an equilibrium, it’s important to
understand the incentives borrowers and lenders face.
• Imagine you’re the lender and another player is the borrower. What’s
your expected payoff from issuing a loan, assuming both of you
behave honorably?
• Remember, there’s a small probability the project fails. In that case,
you lose the money you invested in the project, and if you want to
preserve your honor, you have to challenge the borrower to a duel. That’s
obviously costly—you might be killed, or you might have to live with
the fact that you’ve killed another person.
• Fortunately, there’s a much larger probability that the project succeeds,
in which case you get back the money you invested plus your share of
the profits.
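In symbols (the notation here is mine, not Kingston and Wright's): if p is the small probability the project fails, m is the amount lent, c is the cost of fighting a duel, and s is the lender's share of the profits, then the lender's expected payoff from issuing a loan is

E[payoff] = (1 − p) × s − p × (m + c),

which is positive so long as the probability of failure is small relative to the lender's share of the profits.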


• From the borrower's perspective, if the project fails, he doesn't lose any of his own money, but he has to accept your challenge if he wants to preserve his honor. Again, that's a costly proposition.
• In the more likely event that his project succeeds, he pays you back the money you invested and a share of the profits, keeping the rest of the profits for himself.
• Additionally, this honorable behavior buys both of you continued access to the credit market. So long as a person has always behaved honorably in the past, he can continue to borrow and lend. But if you lose your honor, you lose access to the credit market permanently. Because this is an infinitely repeated game, this means you miss out on an unending stream of future investment opportunities.

Dueling has to be costly enough—that is, dangerous enough—that the borrower will repay the lender whenever he can rather than keeping everything for himself. But it can't be too costly, or the lender will never issue a challenge and the borrower will never accept one. During the Renaissance, for example, it was initially acceptable to duel to the blood. Eventually, though, people decided this wasn't sufficiently costly, so the custom became to duel until one principal was incapacitated or until the doctor in attendance called a halt.
• The important lesson to take away from this example is that dueling,
though it initially seems like utter madness, was an effective way to
solve the problem of asymmetric information. Fortunately, for those
interested in borrowing or lending today, there are better ways to deal
with asymmetric information. Applying for a mortgage may not be fun,
but at least there’s no chance of having to risk your life in a duel.


READINGS
Caplan, “Bryan Caplan—The Case Against Education.”
Leeson, “Ordeals.”

QUESTIONS
1. True, false, or uncertain: A more dangerous—and thus more costly—
duel will always be more effective at creating a separating equilibrium
where only men of honor participate in financial markets, since a more
dangerous duel will give borrowers a stronger incentive to pay back
their loans.
2. Suppose Toyota discovers it's cheaper to add a certain feature to every
version of its Camry sedan rather than make different versions of the
car, some with the feature and some without. Why might Toyota still
choose to reserve that feature for the more luxurious versions of the
Camry? Relate this to the airline example from this lesson.

LESSON 10

GOING ONCE, GOING TWICE: AUCTIONS AS GAMES

You're probably familiar with fast-talking livestock
auctioneers, and there’s a good chance you’ve bought
something at auction on eBay. These are examples
most people imagine when they think of auctions. But auctions
are everywhere—indeed, you’re participating in them all the
time, often without even knowing.

SILENT AUCTIONS
• Most people take out a loan when they buy a new car. If a person stops
making payments on the loan, the lender will have the car repossessed.
In many cases, the car will then be sold at auction. Most of the buyers
will be used car dealers, but individuals are sometimes allowed to bid.
Suppose you find the car you have always wanted, so you decide to bid
in what turns out to be a silent auction.
• In a first-price sealed-bid auction, or simply a first-price auction, everyone
privately writes down a bid. The winner is the person who submits the
highest bid, and she pays a price equal to the bid she submitted.
• It’s a sealed-bid auction because the bids are private—you don’t get to see
your rival’s bid, and she doesn’t get to see yours. It’s a first-price auction
because the winner pays the highest bid, which you can think of as the
first price on the list if you were to rank the bids from highest to lowest.
• In a second-price sealed-bid auction, you and your rivals still write down
your bids privately, and the winner will still be the person who submits
the highest bid. The only difference is that the winner pays a price equal
to the second-highest bid submitted.
• The second-price auction is demand revealing, while the first-price
auction is not. When an auction is demand revealing, it’s in bidders’
best interest to bid their true willingness to pay.

SECOND-PRICE AUCTION PAYOFFS


• To find out why the second-price auction is demand revealing, you can
make a table that shows your payoffs.

• An auction is like a game with a continuum of potential strategies that correspond with the continuum of bids you could submit. For this
example, you can focus on three potential strategies:
ʶ Submit a bid equal to what you’re truly willing to pay, or your true
value, abbreviated as v
ʶ Submit a bid higher than your true value
ʶ Submit a bid lower than your true value
• What if the highest bid submitted by one of your rivals is higher than
even the high bid you’re considering? This is abbreviated as A. You lose
the auction no matter what you bid, so you earn a payoff of $0. You’re not
better off or worse off than you were before the auction.
• What if the highest bid submitted by one of your rivals is higher than
your true willingness to pay but less than the high bid you’re considering?
This is abbreviated as B. In this case, you lose the auction and earn a
$0 payoff if you bid truthfully or submit a low bid, but you win if you
submit your high bid.


ʶ This is a second-price auction, so you pay a price equal to the second-highest bid submitted. In this case, that's B, which is more than
what you were truly willing to pay. This is like paying $11 for $10.
Your payoff is negative, so you would have been better off if you’d
bid truthfully.
• And what if the highest bid submitted by one of your rivals is less
than your true willingness to pay but greater than the low bid you’re
considering? You win the auction if you bid truthfully or if you submit
a high bid.
ʶ In both of these cases, you pay a price equal to the second-highest bid
submitted, which is $C. Because this is less than what you’re truly
willing to pay, your payoff is positive—it’s like paying $9 for $10.
But if you submit a low bid, you lose the auction and miss out on the
deal. Again, you would have been better off if you’d bid truthfully.
• Finally, what if the highest bid submitted by one of your rivals is less than
even the low bid you’re considering? You win no matter what you bid.
ʶ In every case, you pay $D, earning a payoff of v – D, which is like
paying $8 for $10.

SOLUTION
• Now, you can use best-response analysis to see why the second-price
auction is demand revealing.
• If you think the highest rival bid will be A, you’re indifferent between
the three bids you’ve considered. You lose no matter what.
• If you think the highest rival bid will be B, your best response is to either
bid truthfully or to bid low, because bidding high leads to winning the
auction but overpaying.
• If you think the highest rival bid will be C, your best response is to either
bid truthfully or to bid high, because bidding low leads to missing out
on what would be a good deal.


• Finally, if you think the highest rival bid will be D, you’re again
indifferent between the three bids you’ve considered. This time, you
win the auction no matter what.
• This means bidding truthfully is your weakly dominant strategy.
In general, this is a strategy that is sometimes your best response,
sometimes just as good as one or more alternative strategies, and never
worse than one of the other strategies.
• In this example, submitting a bid higher or lower than your true
willingness to pay will never help you, but it can hurt you. You can do
no better than to simply bid what you’re truly willing to pay. This is
because second-price auctions—and other demand-revealing auctions—
separate what you pay from what you say: The price you pay if you win
the auction isn’t determined by what you bid.

FIRST-PRICE AUCTION PAYOFFS


• Contrast this with the incentives you face in the more familiar first-price
sealed-bid auction. Remember, in this auction, you win if you submit
the highest bid, in which case you pay a price equal to what you bid.
• To see why this auction isn’t demand revealing, you can draw a graph.
Your expected payoff is on the vertical axis. This is equal to your
probability of winning times your payoff if you do win. And your bid
is on the horizontal axis. This allows you to show your expected payoff
as a function of your bid.
• The first bid you’ll consider is $0, the lowest bid you can submit. If you
truly value your dream car at $50,000, what would your payoff be if you
win the auction with a $0 bid?
ʶ Your payoff is the difference between what the car is worth to you
and the bid you submitted, so it would be $50,000. But your expected
payoff is the probability you win times the payoff you earn if you do
win. It’s safe to assume the probability you win with a $0 bid is 0.
That means your expected payoff is $0.


• The second bid you’ll consider is $50,000, which is your true willingness
to pay. If you value your dream car at $50,000, what would your payoff
be if you win the auction with a $50,000 bid?
ʶ You can’t be precise about your probability of winning the auction
now, but it’s surely greater than it would be if you bid $0. But your
payoff if you do win the auction is 0: There’s no profit in paying
$50,000 for something that’s worth $50,000. So your expected payoff
is again $0.
• What if you bid somewhere between $0 and $50,000?
ʶ Your probability of winning won’t be as high as it would be if you
bid $50,000, but it will still be positive. And your payoff if you do
win would also be positive, meaning your expected payoff would
be positive. Somewhere in that range, there’s an optimal bid—call
it b*—that strikes an optimal balance between your probability of
winning and your payoff if you do win.


SOLUTION
• Drawing a smooth curve between these three points gives your expected
payoff as a function of your bid. And your optimal bid, b*, is right at
the top.
• Based on this graph, you don’t know exactly what your optimal bid
should be—that will depend on things like the number of people you’re
bidding against, your best guess as to what their bids might be, and
your own willingness to take risks—but you do know that b* is less
than $50,000.
• This means the first-price auction is not demand revealing. It will always
be in your best interest to submit a bid that’s less than your true value.
Since your bid determines the price you pay if you win the auction, this
time you have a strategic incentive to understate what you’re truly willing
to pay in the hope of getting a better deal.
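To put numbers on the hill-shaped curve, here is a small Python sketch under assumptions of my own that the lesson leaves open: there are three rival bidders, and each rival's bid is drawn uniformly between $0 and $50,000.

n = 3                                    # number of rival bidders (assumed)
value = 50_000.0                         # your true willingness to pay

def expected_payoff(bid):
    win_prob = (bid / 50_000.0) ** n     # chance every rival bid falls below yours
    return win_prob * (value - bid)      # probability of winning times profit if you win

best_bid = max(range(0, 50_001, 100), key=expected_payoff)
print(f"optimal bid b* = ${best_bid:,}")

Under these assumptions b* works out to $37,500. Different assumptions move the optimum around, but it always lands below your true value.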

OPEN OUTCRY AUCTIONS


• In open outcry auctions, an auctioneer calls out prices until the item
up for auction is sold. The most common open outcry auctions are the
English auction and the Dutch auction.
• In an English auction, the auctioneer starts at a low price and continues
to raise the price until no one is willing to bid any higher. At that point,
the bidder who agreed to the highest bid wins the auction and pays
a price equal to that bid.*
• Here, your weakly dominant strategy is to keep bidding up to your true
willingness to pay. It doesn’t make sense to stop bidding before that,
because you might miss out on a good deal. Likewise, it doesn’t make
sense to continue bidding beyond your true willingness to pay, because
you could end up paying more than you’re willing to. Like the second-
price sealed-bid auction, the English auction is demand revealing.

* This is how fine art is sold at Sotheby’s and how hogs are sold at
the county fair.

• In the Dutch auction, the auctioneer starts at a high price and gradually
lowers it until someone agrees to pay the last price announced.**
• Here, you don’t want to stop the auction at a bid equal to what you’re
truly willing to pay—that’s like paying $10 for $10. Instead, you should
wait until the price has fallen to something less than that. Just like the
first-price sealed-bid auction, the Dutch auction is not demand revealing.

AUCTIONS IN LIFE
• Life is full of auctions. Recognizing what kind of auction you’re
participating in—and the incentives that auction presents you and other
bidders with—has important implications for how you should bid.
• For example, think about making an offer on a house. On one side of the transaction you have the seller, who is probably currently living in the house. The bidders are potential buyers making offers on the house. It's possible you're the only bidder, or you might find yourself bidding against several other potential buyers—you don't necessarily know.
• This is most similar to the first-price auction, so it isn't demand revealing. And because your offer determines the price you pay if your offer is accepted, you have an incentive to understate what you'd truly be willing to pay for the house.

Which auction is the best for the seller? According to the revenue equivalence theorem, assuming certain conditions are met, all four of the auctions discussed in this lesson should bring in the same expected revenue.

** This is how fresh-cut flowers are sold in Amsterdam and how the
US government sells Treasury bonds.

READINGS
“The Division Problem.”
Lucking-Reiley, “Using Field Experiments to Test Equivalence
between Auction Formats.”
McAfee, McMillan, and Wilkie, “The Greatest Auction in History.”
Sun, “Divide Your Rent Fairly.”

QUESTIONS
1. Explain why the following statement is false: A seller will always earn
more money from a first-price sealed-bid auction than from a second-
price sealed-bid auction. After all, in the second-price auction, the
winner only pays the second-highest bid.
2. Imagine you're bidding on a rare antique rug that's being sold at an
estate auction. Ann, Bob, and Cindy are bidding against you. The
following table shows how much each of you is truly willing to pay
for the rug.

Bidder True WTP

You $4,000
Ann $3,000
Bob $2,000
Cindy $1,000

Complete the following table showing who wins and what he or she
pays in an English auction and a second-price sealed-bid auction.

                 English auction     Second-price sealed-bid auction

Winner

Price paid

LESSON 11

HIDDEN AUCTIONS: COMMON VALUE AND ALL-PAY

This lesson explores why common-value goods are
different from private-value goods and why this
difference can lead to the winner’s curse, where winners
consistently end up overpaying. It also discusses the seemingly
strange but surprisingly common all-pay auction, where all
participants pay, even when they don’t win.

COMMON-VALUE GOODS
• Remember that the person who submits the highest bid in a second-price
sealed-bid auction wins and pays a price equal to the second-highest
bid submitted. If you’re bidding on a private-value good in one of these
auctions, such as a piece of furniture that might mean more to you than
to your rivals, you can do no better than to submit a bid equal to what
the good is truly worth to you.
• But imagine you’re bidding on a jar of pennies—or, more precisely, the
dollar value of the pennies inside the jar. This jar of pennies is an example
of a common-value good. It contains the same number of pennies no
matter who buys it.
• If you’re bidding on a common-value good in one of these second-
price sealed-bid auctions, bidding your best guess is not a good idea. If
everyone bids their best guess, the winner will be the person who had the
highest guess, which is almost surely an overestimate—as is the second-
highest guess, which determines the price the winner pays.
• Instead, you should bid as if you know that your guess is the highest
of any of the guesses. This is because the winner of a common-value
auction isn’t typically the person who made the most accurate guess; it’s
the person who made the worst guess, overestimating the value of the
pennies by a wider margin than anyone else.

COMMON-VALUE AUCTIONS
• Imagine a game where three players are each dealt one card from a deck
of cards. They then bid in a second-price auction for a pot of money,
the value of which is equal to the average value of the three cards drawn.

• You can think of each player’s card as being a signal to the value of the
pot. Because of the way the game is constructed, the guesses must be
accurate on average. But unless all three cards are the same, the highest
card must be an overestimate of the prize’s value, and the lowest card
must be an underestimate.
• To make the arithmetic easier, assume the deck is made up of just 11
cards: a joker, an ace, and the numbered cards two through 10. The joker
has a value of 0, the ace has a value of 1, and each of the other cards is
simply equal to the value on its face.
• You can also assume the cards are drawn with replacement, meaning
your card gets shuffled back in the deck after you see it. So if you draw
a certain card, that doesn’t mean no one else can draw it.
• If the three players draw an ace, a six, and an eight, the value of the prize
is the sum of the three cards divided by 3, or the average of the cards.
This equals $5.
• Imagine you’re the player who is dealt an eight. That’s your signal, or
your best guess, to the value of the pot. The second-price auction is
demand revealing, but that doesn’t mean you want to bid $8. That’s
because if you win the auction, you know you had the highest signal, so
you know the pot is worth less than $8.
• To see this, suppose each player bids her signal. You win the auction and
pay a price equal to the second-highest bid—in this case, $6—meaning
you pay more than the $5 the pot is worth. That’s the winner’s curse.

THE WINNER’S CURSE


• Let’s be a bit more sophisticated about the way you think about the likely
value of the pot.
• You know your card is an eight. Given what you know about the deck
the cards are dealt from, the expected value of a draw from the deck is 5.
This means that if a player were to draw from the deck again and again,
the average of all those draws would be 5.

• So what’s the expected value of the pot given that you’ve drawn an eight?
Your eight plus the five you’d expect the second player to draw and the
five you’d expect the third player to draw—that’s 18—all divided by
3 equals 6.
• You also wouldn’t want to bid $6. Again, imagine the other players
behaved the same way, bidding the average of their draw from the deck
and two fives. That means the player who drew a six would bid $5.33.
And the player who drew an ace would bid $3.67.
• Once again, you’d have the highest bid, so you’d win the auction. You’d
pay a price equal to the second-highest bid, which in this case is $5.33.
But the pot is still worth just $5.
• With this more sophisticated way of formulating your bid, you’re not
overpaying as much, but you’re still overpaying. That means you’re still
suffering from the winner’s curse.

AVOIDING THE WINNER’S CURSE


• This time, take into account what you learn when you find out you’ve
won the auction. When you win, you learn no one else’s card was higher
than yours. What would you expect the pot to be worth in that case?
• You know your card is an eight. If you assume the other two cards are
eights or lower, each of those other cards has an expected value of 4. It’s
a bit like they’re being drawn from a different deck—one that includes
the cards joker through eight. If you were to draw from that deck again
and again, the average of those draws would be a four.
• That means the expected value of the pot is your eight plus the two fours
you’d expect each of the other players to draw, assuming their cards are
no higher than yours. This ends up being $5.33. That’s the way you
avoid the winner’s curse.
• Just to be sure, let’s see what the second-highest bid would be when
everyone behaves this way. As before, that will be the bid submitted by
the player who draws the six.

• From that player’s standpoint, the expected value of the pot if no one else
draws a card higher than a six would be the average of her own card plus
two more cards drawn from a deck with only the cards joker through six.
That’s 6 + 3 + 3, or 12, all divided by 3, which equals 4.
• So you win the auction, paying the second-highest bid—in this case, just
$4—meaning you’re finally able to avoid the winner’s curse.
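A quick simulation confirms the arithmetic. The sketch below is mine, not the guidebook's: it deals the two rival cards, keeps only the deals where your eight is the high card, and averages the pot's value.

import random

random.seed(1)
pots = []
while len(pots) < 100_000:
    rivals = [random.randint(0, 10) for _ in range(2)]  # two rival cards, 0 through 10
    if max(rivals) <= 8:                                # keep only deals your eight wins
        pots.append((8 + sum(rivals)) / 3)              # pot = average of the three cards

print(f"expected pot given your eight is highest: {sum(pots) / len(pots):.2f}")
# Prints roughly 5.33, matching the lesson's (8 + 4 + 4) / 3.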

ALL-PAY AUCTIONS
• In all-pay auctions, everyone who submits a bid pays what they bid,
regardless of whether they win the auction. The person who submits the
highest bid either wins with certainty or, much more commonly, has the
highest probability of winning. But everyone has to pay what they bid.
• This may sound ridiculous, but people bid in all-pay auctions all the
time. For example, in lawsuits, each party’s bid is the amount it spends
on a legal team. The side that spends more on the larger and more expert
team may not be guaranteed to win the case, but that certainly increases
its chances of winning.
• Another example is elections. In the 2016 presidential election, Hillary
Clinton’s campaign spent roughly $1.2 billion, while Donald Trump’s
spent about $600 million. The prize is the presidency, and the campaigns
bid by spending on rallies, advertisements, and organizers. Donald
Trump won the election, but Hillary Clinton’s campaign still had to
pay its $1.2 billion bid.

TRIAL BY BATTLE
• Another example of an all-pay auction comes from a paper by Peter
Leeson. His specialty is explaining why what seem to be historical
curiosities are actually rational ways for people to solve problems in the
absence of a well-functioning government. This example focuses on the
Norman England custom of trial by battle.


• Property markets in Norman England weren't well functioning. The king granted land to several major lords, and these lords benefited from
the king’s protection. In exchange, the lords paid taxes to the king.
• Each of those major lords granted some of their land to several minor
lords, who enjoyed the lord’s protection in exchange for paying taxes.
The minor lords would then grant some of their land to still more minor
lords, and so on, until you had the tenants who actually worked the land.
• This feudal chain made it hard to buy or sell land. That was, in part,
because in order to sell your land, you’d need permission of the lord who
granted you the land. So people in Norman England usually transferred
land from those who didn’t derive much value from it to people who
valued it more highly using trial by battle.
• The current and prospective owners of the land didn’t fight each other
directly. Instead, they hired a champion to fight on their behalf. The
champions would fight with short clubs until one was killed—which was
rare—or until one surrendered. The belief was that God would favor
whichever party had a rightful claim to the land.

PROBABILITY OF WINNING
• Imagine you, as the tenant, own a piece of land that you value at £1.
And imagine another player, called the demandant, claims that same
land rightfully belongs to him, and he values it at £2. Assuming both
of you can convince a judge that you have a reasonable claim to owning
the land, the judge will order each of you to hire a champion to fight
on your behalf.
• Assume the probability that your champion wins the battle, which we’ll
call pT , equals the amount you spend divided by the amount you and the
demandant collectively spend:

pT = t/(t + d).
• In this equation, t is the amount you spend on your champion, and d is
the amount the demandant spends on his.

• Likewise, the probability that the demandant’s champion wins the battle,
which we’ll call pD, equals the amount he spends divided by the amount
you both collectively spend:

pD = d/(t + d).
• In other words, a more expensive champion is more likely to win the trial
by battle. If you both hire equally expensive champions—if t = d—then
you’re both equally likely to win. But if the demandant spends twice as
much on his champion as you spend on yours, his will win two-thirds
of the time.

PAYOFFS
• Your expected payoff, which is represented as E for expected value and
πT for the tenant’s payoff, equals the probability your champion wins
times your payoff if he wins, plus the probability your champion loses
times your payoff if he loses.
• You’ve already found the probability that your champion wins, and the
probability that your champion loses is just the probability that the
demandant’s champion wins.
• If your champion wins, you get to keep your land, so your payoff is the
value you place on the land, or £1, minus the amount you paid your
champion, or t. If your champion loses, you no longer have your land,
but you still have to pay your champion, so your payoff is simply –t.
• This all gives you

E[πT] = [t/(t + d)] × (1 − t) + [d/(t + d)] × (−t).

• After a bit of arithmetic, this simplifies to

E[πT] = t/(t + d) − t.

• Using similar logic, the demandant's expected payoff is

E[πD] = 2d/(t + d) − d.

• Using a few lines of calculus, you can show that the demandant should
spend twice as much on his champion as you spend on yours. In total,
you spend two-thirds of a pound on champions.
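You can confirm that result numerically without the calculus. This Python sketch is mine, not Leeson's: each side repeatedly best-responds to the other over a fine grid of spending levels, and the spending settles at t = 2/9 and d = 4/9 of a pound.

def tenant_payoff(t, d):
    return t / (t + d) - t          # the land is worth 1 pound to the tenant

def demandant_payoff(t, d):
    return 2 * d / (t + d) - d      # the land is worth 2 pounds to the demandant

grid = [i / 1000 for i in range(1, 1001)]                # spending levels to search
t, d = 0.5, 0.5                                          # arbitrary starting point
for _ in range(50):                                      # alternate best responses
    t = max(grid, key=lambda x: tenant_payoff(x, d))
    d = max(grid, key=lambda x: demandant_payoff(t, x))

print(f"t = {t:.3f}, d = {d:.3f}, total = {t + d:.3f}")  # about 0.222, 0.444, 0.667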
• Importantly, this is less than the demandant would have spent if he had
simply bought the land. You value the land at £1, so he’d have to pay
more than that to buy it.
• Trial by battle may seem absurd, but in the absence of a property market,
this violent all-pay auction was an effective way of getting land into the
hands of the people who valued it the most.

READINGS
Leeson, “Trial by Battle.”
Thaler, “The Winner’s Curse.”

QUESTIONS
1. In the reality TV series Storage Wars, thrift store owners bid on the
contents of abandoned storage lockers. The winner of the auction then
tries to sell the contents at his or her store. Because bidders are only
allowed a quick peek at the contents before placing a bid, the winner
is often surprised by how much (or how little) the haul is worth. In
what ways is this like a common-value auction?
2. The notion of an auction where everyone—not just the winner—pays
what he or she bids is so foreign that it may seem like the only sensible
strategy is to not participate. But that doesn’t have to be true.
Imagine you and another pharmaceutical company are fighting
over who should control the patent for a new drug. Because the
other company is larger than yours, the drug is worth more to it. In
particular, it stands to earn $18 million in profit from this new drug,
while your company would earn just $9 million.


Why might it be better for you to spend $2 million on your legal team
(while the other company spends $4 million on its legal team) than for
you to not contest the case and let the other company have the patent?
Similar to the trial by battle example, you can assume that if you spend
half as much on your legal team as your opponent does, you have one
chance in three of winning.

LESSON 12

GAMES WITH CONTINUOUS STRATEGIES

In almost every game you've seen up to this point, players have had a fixed number of possible strategies. As long as the number of potential strategies is countable, the game has discrete strategies. This lesson focuses on games with continuous strategies. In these, players can choose from an infinite number of potential strategies.

You won't be able to use a payoff matrix for games with continuous strategies. Instead, this lesson relies on just the most straightforward differential calculus. You can still learn from this lesson's examples even if you choose not to worry about the equations.

GAME THEORY IN THE GULF OF ADEN


• This maritime example has to do with deciding how much to contribute
toward the provision of a public good, where a public good is one you
get to enjoy even if you don’t help pay for it. And your enjoyment of the
public good doesn’t detract from someone else’s enjoyment.
• If you want to sail to Europe from Asia or the Arabian Peninsula and
you don’t want to go all the way around Africa, there’s just one shortcut.
That’s to sail through the Gulf of Aden, the Red Sea, the Suez Canal,
and finally the Mediterranean Sea.
• But until a few years ago, there was one serious problem with that
route: the pirates based in the failed state of Somalia. As an oil tanker
or a container ship sailed through the Gulf of Aden, pirates might pull
up alongside in a speed boat, board the ship, and hold its crew and cargo
for ransom.*

* This is what happens in the movie Captain Phillips, which is based on the true story of the hijacking of the Maersk Alabama cargo ship.

• In an effort to protect commercial shipping lanes, countries sent naval ships to patrol the gulf. For simplicity, we'll call these gunboats. The
protection the gunboats provided was a public good because once you’ve
made the gulf safe for commercial shipping, every tanker and container
ship that travels through the gulf enjoys that protection regardless of
whether its owners helped pay for the gunboats.

PAYOFFS
• There are many countries whose commercial ships travel the Gulf
of Aden, and there are several countries whose gunboats helped
patrol it. For this example, you can focus on the United States and
Great Britain.
• You can assume that because Great Britain is closer to the Gulf of Aden,
more of the products it produces and consumes travel through the gulf.
This means it derives twice the benefit the US does from the protection
gunboats provide.
• And because the United States has spent so much more over the years
on defense, you can also assume US gunboats are more technologically
sophisticated and, therefore, more effective at providing protection.
• A little more formally, the United States’ payoff from the effort it and
Great Britain devote to defending the gulf is

πUS = (2a + b + ab) − a².

• On the right-hand side of the equation, a is the number of gunboats America sends and b is the number Great Britain sends.
• We’ll allow for fractional values because a gunboat doesn’t have to be in
the gulf all the time. If the US sends one boat but it’s there for just six
months of the year, a = 1/2.


• The term in parentheses is the benefit the US enjoys from the gunboats
both countries send. Note that America’s gunboats count for twice as
much as Great Britain’s because of their superior technology. The term ab
is there because there are some aspects of the job that go more smoothly
when both countries contribute—perhaps the gunboats have different
relative strengths, meaning they complement one another.
• Finally, the squared term is the cost associated with sending gunboats
to the Gulf of Aden. Perhaps the US would have had one in the region
anyway, so the cost of the first gunboat is small. But as it sends more and
more, the US is forced to divert them from increasingly critical missions
elsewhere in the world.
• Great Britain’s payoff function looks similar:

• Note, though, that the British get twice as much benefit from the
gunboats the two countries deploy. Again, that’s because Great Britain
is so much closer to the Gulf of Aden.

BEST-RESPONSE RULE
• This is a game with two players—the United States and Great Britain—
where payoffs are measured as the benefit from the protection the
gunboats provide minus the cost of deploying them.
• Each country has a continuum of strategies to choose from since each
can choose any value between zero and the total number of gunboats
in its navy. Because we’re allowing for fractional values, there are in
fact an infinite number of potential strategies for each country to
choose from.
• If you were to plot the United States’ payoff function for some given level
of effort on Great Britain’s part, you’d get a hill-shaped curve. The US
wants to be at the very top of this payoff function. The slope of its payoff
function at that maximum is 0.


• The next step is to take the derivative of the payoff function with respect
to the variable that’s under the United States’ direct control—the number
of gunboats it deploys:

dπUS/da = 2 + b − 2a.
• This expression tells you the slope of the payoff function for any values
of b, which the US can’t directly control, and a, which it can.
• Because you want to find the value of a at which the slope of the payoff
function is 0, set this derivative equal to 0 and solve for a:

2 + b − 2a = 0.
• Subtracting 2 + b from both sides and then dividing both sides by −2, you get

a = 1 + b/2.

• This is what’s called a best-response rule. This rule tells you how
many gunboats the United States should deploy to the Gulf of Aden
in order to maximize its payoff given the number of gunboats Great
Britain sends.
• If, for example, Great Britain sends no gunboats, the United States
should deploy one gunboat. But if Great Britain sends four gunboats,
the United States should deploy three.
• It may seem counterintuitive that as Great Britain devotes more effort
to protecting commercial ships, the United States should devote more
effort as well. But remember, the countries’ gunboats complement one
another. With that in mind, America is willing to do more work when
it has more help.
• You can follow the same logic to find Great Britain’s best-response rule,
which is b = 1 + a.

NASH EQUILIBRIUM
• Next, you can use these two best-response rules to find the Nash
equilibrium for this game. Remember that at a Nash equilibrium, no
player can improve his own payoff by changing his own strategy. In other
words, at a Nash equilibrium, each player’s strategy is a best response to
the strategies chosen by every other player.
• You can apply that reasoning to this game by finding the values of a and
b that simultaneously satisfy both countries’ best-response rules.
• The first step will be to substitute Great Britain’s best-response rule in
for the b in the United States' best-response rule:

a = 1 + (1 + a)/2.
• If you rearrange this equation and solve for a, you’ll find that a = 3.


• You can then use Great Britain’s best-response rule to find the number
of gunboats it should deploy to the Gulf of Aden when the United States
sends three. That’s 1 + 3 = 4.
• This is a Nash equilibrium because, given the amount of effort Great
Britain puts in, the United States can do no better than to deploy three
gunboats. And given the amount of effort the United States puts in,
Great Britain can do no better than to deploy four gunboats.
• This same kind of analysis can be applied to any situation where two
parties have to decide how much effort to devote to a joint project,
such as roommates deciding how much time to spend cleaning their
apartment, or countries deciding how much money to spend on reducing
greenhouse gas emissions.
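As a quick check, you can iterate the two best-response rules in Python and watch them settle at the Nash equilibrium. The sketch is mine, not the guidebook's:

def us_best_response(b):
    return 1 + b / 2            # from setting 2 + b - 2a = 0

def gb_best_response(a):
    return 1 + a                # Great Britain's best-response rule

a, b = 0.0, 0.0                 # start from no gunboats at all
for _ in range(100):            # repeated best responses converge quickly
    a = us_best_response(b)
    b = gb_best_response(a)

print(f"a = {a:.4f}, b = {b:.4f}")   # prints a = 3.0000, b = 4.0000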

GAME THEORY IN BIOLOGY


• Imagine two bull elk with their antlers locked in combat. The winner
will earn the right to breed with the females in the herd, and the loser
will not pass his genes on to the next generation.
• Bull elk antlers can measure up to four feet across and weigh as much
as 40 pounds. They’re so huge because the larger a bull elk’s antlers, the
more likely he is to win the battle for breeding rights and to pass his
genetic traits on to the next generation.
• There are also costs associated with these antlers, like the energy required
to grow them and carry them around, not to mention how difficult it
would be to eat or flee from predators.
• Imagine this as a game where you and your opponent are playing the
role of a bull elk. You have to simultaneously decide how large your
antlers should be. And because you’re both choosing from a continuum
of possible sizes, this is a game with continuous strategies.


PAYOFFS
• You can measure payoffs in terms of the expected benefit the antlers
provide minus the cost associated with them. If y is the size of your antlers
and m is the size of your opponent’s, you can assume the probability you
win the battle is

y/(y + m).

• If you choose not to grow antlers, you have no chance of winning. If you and your opponent both grow antlers four feet across, you'd expect
to win half the time.
• If you win the fight, you’ll get to mate with α females, where α is a constant
related to how polygynous** the species is. The more polygynous the
species is, the larger α will be.
• If you lose the fight, you mate with no females and derive no benefit
from your antlers.
• Win or lose, you incur a cost for growing your antlers. For simplicity,
that cost can be called y.
• With this in mind, you can write your expected payoff as

E[π] = [y/(y + m)] × α − y.

• On the right-hand side, the first term is your probability of winning times the benefit you receive if you win. The second term is the cost you
incur for growing your antlers whether you win or lose.
• Note that you can leave out your probability of losing times the benefit
you receive if you lose, because that benefit is 0.

** In this context, polygyny refers to one male mating with more than
one female.

• Assuming α is positive, this payoff function is hill-shaped, meaning your expected payoff is initially increasing with antler size but eventually
begins to decrease. You want to be at the top of this payoff function,
where the slope is 0.

SOLUTION
• Like before, take the derivative of your expected payoff function
with respect to the variable that’s under your control—your antler
size—and set that derivative equal to 0. Solve for y to get your best-
response rule.
• The calculus is a bit more involved in this example. Because y appears in both the numerator and the denominator of your probability of winning, you
have to use what’s called the quotient rule.
• You should find that your best-response rule is

y = √(α × m) − m.
• By identical reasoning, your opponent's best-response rule is

m = √(α × y) − y.
• The Nash equilibrium in this game is when each of your strategies is
a best response to the other’s strategy. In other words, it’s the y and the
m that simultaneously satisfy both of these equations.
• You find this by plugging one of these best-response rules into the other.
After several lines of algebra, you should get

y = m = α/4.
• Your antlers should be as big as your opponent’s, and your opponent’s
antlers should be as big as yours. And the size of both of your antlers
depends on the polygyny constant. The more polygynous the species,
the more you’re willing to invest in antler size.


READINGS
Basu, “The Traveler’s Dilemma.”
Frank, The Darwin Economy.
Pool, “Putting Game Theory to the Test.”

QUESTIONS
Grouper and octopuses, two carnivorous sea creatures, often hunt
together in coral reefs. The octopus reaches deep into the coral with
its tentacles to catch fish. Sometimes it succeeds, but more often, fish
swim out of the coral and into the waiting mouth of the grouper.
This isn’t exactly cooperation, since the octopus doesn’t benefit from
the grouper lurking nearby. With this in mind, the octopus’s payoff
function is

( ⋯ ) − x²,

where x is the number of hours the octopus spends hunting each day, the term in parentheses is its benefit from hunting, and the squared term is its cost of hunting.
Since the grouper does benefit from the octopus’s hunting effort, the
grouper’s payoff function is more complicated:
,
where is the number of hours the grouper spends hunting each day,
the term in parentheses is its benefit from hunting, and the squared
term is its cost of hunting.
1. Find the Nash equilibrium for this game.
2. Calculate each sea creature's payoff at the Nash equilibrium.


BIBLIOGRAPHY
“£66,885 Split or Steal?” YouTube video, https://www.youtube.com/
watch?v=yM38mRHY150. This four-minute video clip from the British
gameshow Golden Balls is a delightful real-life example of a prisoner’s
dilemma–style game with $100,000 at stake.
“The Division Problem.” Planet Money. Podcast audio, January 25, 2019.
https://www.npr.org/transcripts/688849249. NPR’s Planet Money team
talks about how auction-like markets can be used to divide rent between
roommates, pick which movie friends should see together, and better
allocate boat slips in Santa Barbara Harbor.
“What’s Left When You’re Right?” Radiolab. Podcast audio. September 5,
2019. https://www.wnycstudios.org/podcasts/radiolab/episodes/whats-
left-when-youre-right. This hour-long podcast dives deeper into the
British gameshow featuring real-life high-stakes prisoner’s dilemma
games. It focuses on one contestant’s attempt to twist the rules of the
game to his benefit.
Akerlof, George. “The Market for ‘Lemons’: Quality Uncertainty
and the Market Mechanism.” The Quarterly Journal of Economics 84,
no. 3 (1970): 488–500. Akerlof won a Nobel prize for showing how
asymmetric information can destroy the market for good products,
leaving only a market for lemons. This is true not only for used cars
but for insurance, loans, and job candidates.
Andersen, Steffen, Seda Ertaç, Uri Gneezy, Moshe Hoffman, and
John List. “Stakes Matter in Ultimatum Games.” American Economic
Review 101, no. 7 (2011): 3427–3439. In most laboratory experiments,
participants playing the ultimatum game are both more spiteful and more
generous than game theory would predict. The authors of this article
ask villagers in northeast India to divide a pot worth more than half of
the typical villager’s annual income. They find that when the stakes
are that high, people’s behavior is much more consistent with theory.


Axelrod, Robert. The Evolution of Cooperation. New York: Basic Books, 1984. Axelrod, a University of Michigan political scientist, wrote this
book based on his now-famous prisoner’s dilemma tournaments. In
addition to presenting the results of those tournaments and the winning
strategies, he discusses how cooperation can emerge during wartime and
in biology and what you can do to promote cooperation.
Basu, Kaushik. “The Traveler’s Dilemma: Paradoxes of Rationality in
Game Theory.” American Economic Review 84, no. 2 (1994): 391–395.
If it’s possible for an article in an economics journal to be adorable,
this article is. Basu begins by presenting a clever but straightforward
continuous-strategy game. Equally straightforward game theoretic
analysis leads to a surprisingly counterintuitive equilibrium. Basu goes
on to present three possibilities for reconciling theory with the way he
believes people would behave in practice.
Caplan, Bryan. “Bryan Caplan—The Case Against Education.”
February 22, 2018. YouTube video, www.youtube.com/
watch?v=Pa1aMLB0uno. Employers can’t observe how smart and
hardworking a potential employee is, but they can easily observe
a potential employee’s educational credentials. Economists have long
recognized that obtaining degrees is a way for job candidates to signal
how qualified they are. In this hourlong interview, Caplan argues
signaling is responsible for perhaps 80% of the college wage premium.
Case, Nicky. “The Evolution of Trust.” July 2017. https://ncase.me/trust/.
This is a mesmerizing visualization of a repeated prisoner’s dilemma
tournament that lets you pit tit-for-tat (or what the visualization calls
copycat) against a series of other strategies to see which prevails in head-
to-head contests, group contests, and an evolutionary model.
Dugar, Subhasish. “Nonmonetary Sanctions and Rewards in an
Experimental Coordination Game.” Journal of Economic Behavior
& Organization 73, no. 3 (2010): 377–386. This article presents the
results of a laboratory experiment where participants play a seven-
strategy version of the stag hunt game. Dugar enriches the game by
allowing participants to send signals to others indicating they either
approve or disapprove of the other players' behavior in previous rounds.
While statements of approval don’t stop players from gravitating toward
the lowest-payoff Nash equilibrium, statements of disapproval cause
players to move toward the highest-payoff equilibrium.
Frank, Robert. The Darwin Economy: Liberty, Competition, and the
Common Good. Princeton: Princeton University Press, 2011. Frank
argues that humans, like bull elk or peacocks, are locked in a continuous-
strategy version of the prisoner’s dilemma where each of us has an
incentive to outspend our rivals on things that make us no better off
from an objective standpoint but merely make us appear more successful
than our peers.
Kahneman, Daniel, Jack Knetsch, and Richard Thaler. “Fairness and
the Assumptions of Economics.” The Journal of Business 59, no. 4 (1986):
S285–S300. In this wide-ranging paper written for a general audience,
the authors use the ultimatum game as a springboard for discussing
which business practices the public deems fair and which they do not.
Kingston, Christopher G. and Robert E. Wright. “The Deadliest of
Games: The Institution of Dueling.” Southern Economic Journal 76, no.
4 (2010): 1094–1106. This article is the source material for the dueling
example from lesson 9. The first half presents a history of dueling and
an accessible presentation of dueling as a sequential-move game. The
mathematical model in the second half of the article is more challenging
but will be of interest to the mathematically inclined.
Leeson, Peter. “Oracles.” Rationality and Society 26, no. 2 (2014):
141–169. The game of chicken has two well-known pure-strategy Nash
equilibria, but it also has a mixed-strategy Nash equilibrium where both
players randomize between driving straight and swerving. Naturally, this
sometimes leads to players crashing into one another. Leeson argues that
turning to oracles, such as reading tea leaves or poisoning chickens, may
be one way to coordinate on one of the pure-strategy Nash equilibria,
avoiding the possibility of the worst-case scenario where both players
stand firm.


———. "Ordeals." The Journal of Law & Economics 55, no. 3 (2012): 691–714. For 400 years, the Catholic Church used ordeals to determine
guilt or innocence. Whatever your beliefs, this practice seems bizarre by
today’s standards. Leeson argues that ordeals had the potential to create
a tidy separating equilibrium where the guilty would confess their crimes
and only the innocent would undergo an ordeal (which the priest rigged
to protect the innocent from harm).
———. “Trading with Bandits.” The Journal of Law & Economics 50,
no. 2 (2007): 303–321. Leeson uses backward induction to show
how villagers in precolonial west central Africa were able to interact
peacefully and profitably with roaming traders who would rather pillage
than bargain. The villagers did this by introducing credit markets and
demanding tribute payments from traders.
———. “Trial by Battle.” Journal of Legal Analysis 3, no. 1 (2011): 341–
375. Norman England’s feudal chain of lord-tenant relationships made
it nearly impossible to buy or sell land. Then how could property change
hands? By hiring champions to fight on behalf of the person who owned
the land and the person who wanted it. Whoever spent the most on their
champion was more likely to win this trial by battle, so land tended to
end up in the hands of those who valued it most.
Lucking-Reiley, David. “Using Field Experiments to Test Equivalence
between Auction Formats: Magic on the Internet.” American Economic
Review 89, no. 5 (1999): 1063–1080. The English auction and the
second-price sealed-bid auction are strategically equivalent, meaning
participants should follow the same bidding strategy in either auction.
The same is true for the Dutch and second-price sealed-bid auctions.
Lucking-Reiley tests whether this holds in practice by selling thousands
of dollars’ worth of trading cards in online auctions.
McAfee, R. Preston, John McMillan, and Simon Wilkie. “The Greatest
Auction in History.” In Better Living through Economics, edited by
John Siegfried, 168–187. Cambridge: Harvard University Press, 2012.
Siegfried set out to publish a collection of essays on policies that were
inspired by academic economists and that virtually all Washington
lawmakers today would agree were a success. That’s a short list, but the
FCC’s radio-spectrum auctions are on it. The authors of this chapter
argue that replacing spectrum lotteries with auctions struck the right
balance between bringing in revenue for the government and putting
spectrum rights into the hands of companies prepared to do the most
with this scarce resource.
McKelvey, Richard and Thomas Palfrey. “An Experimental Study of
the Centipede Game.” Econometrica 60, no. 4 (1992): 803–836. Game
theorists talked about the centipede game for years before McKelvey
and Palfrey decided to have people play the game in the laboratory. In
addition to presenting the results of their experiment, the authors put
forward an explanation for why people’s behavior isn’t consistent with
backward-induction thinking. The second half of the paper is quite
dense, but the first half is accessible.
Nasar, Sylvia. A Beautiful Mind. New York: Simon & Schuster, 1998.
Sylvia Nasar’s biography of John Nash, the game theory pioneer and
namesake of the Nash equilibrium, is a pleasure to read and is the basis of
the 2001 film starring Russell Crowe. While the book focuses primarily
on Nash’s life, it does include interesting insights into game theory, such
as the story behind the creation of the prisoner’s dilemma game.
Palacios-Huerta, Ignacio and Oscar Volij. “Field Centipedes.” American
Economic Review 99, no. 4 (2009): 1619–1635. These authors build on
McKelvey and Palfrey’s work by proposing another explanation for why
laboratory participants don’t behave how theory predicts when playing
the centipede game: lack of common knowledge of rationality. If one
player isn’t sure another will employ backward-induction thinking,
it’s not in his best interest to stop the game immediately. The authors
test their theory by playing the game with college undergraduates and
chess grandmasters.
Palacios-Huerta, Ignacio. “Professionals Play Minimax.” The Review
of Economic Studies 70, no. 2 (2003): 395–415. In another application
of mixed-strategy Nash equilibrium to professional sports, this article
focuses on penalty kicks in Europe’s top professional soccer leagues.
The author finds that players kick to the goalkeeper's left or right,
and the goalkeeper dives to his left or right, with almost exactly the
probabilities theory predicts.
Pool, R. “Putting Game Theory to the Test.” Science 267, no. 5204
(1995): 1591–1593. You can’t say it any better than the author does in
this short article’s subheading: “Animal behavior—from aggression in
mole rats to cooperation among guppies—is providing field tests of
this tool for understanding games of all kinds, from poker to politics.”
Reiley, David, Michael Urbancic, and Mark Walker. “Stripped-Down
Poker: A Classroom Game with Signaling and Bluffing.” The Journal
of Economic Education 39, no. 4 (2008): 323–341. Poker is an excellent
laboratory for studying strategic thinking in the face of uncertainty. This
article reduces poker to its simplest form and shows how this stripped-
down version of the game can be used to teach lessons about signaling,
mixed strategies, and probability.
Rousu, Matthew. “Run or Pass? A Football Play-Calling Experiment
to Illustrate the Mixed Strategy Nash Equilibrium.” Journal of Business
Education no. 9 (2008): 79–89. This article is the basis for the run or
pass game from lesson 5. In addition to providing insight into how game
theory can be applied to sports, it also gives a glimpse at how instructors
use games like this to teach game theory.
Schelling, Thomas. The Strategy of Conflict. Cambridge, MA: Harvard
University Press, 1960. In one of the field’s true classics, Schelling, a Nobel
Prize winner and a reported inspiration for the Dr. Strangelove character,
covers a wide range of
game theory topics. His discussion of tacit bargaining in chapter 3 is
most relevant to the coordination games covered in lesson 4.
Selten, Reinhard. “The Chain Store Paradox.” Theory and Decision
9 (1978): 127–159. A chain store with a monopoly in each of 20 cities
faces a different potential competitor in each city. Common sense
suggests the chain store should respond aggressively the first time
a competitor challenges one of its local monopolies. Slashing prices
would lead to deep losses in that city but would allow the chain
store to maintain its monopoly in the other 19. Backward induction
suggests otherwise.
Smith, J. M. and G. R. Price. “The Logic of Animal Conflict.” Nature
246 (1973): 15–18. In this admirably concise paper, the authors present
a five-strategy version of the hawk-dove game from lesson 3, showing
that natural selection can lead to limited war, where animals rarely
seriously injure members of their own species.
Sun, Albert. “Divide Your Rent Fairly.” The New York Times, April 28,
2014. https://www.nytimes.com/interactive/2014/science/rent-division-
calculator.html. Are you having trouble deciding how to divide up the
rent between roommates? The New York Times is here to help with
this handy online calculator. But be careful: You get different results
depending on who goes first.
Suri, Jeremi. “The Nukes of October: Richard Nixon’s Secret Plan to
Bring Peace to Vietnam.” Wired, October 25, 2008. A recurring theme
in lesson 7 is that you sometimes benefit from appearing less rational
than you truly are. Did Richard Nixon send a squadron of nuclear-
armed B-52s racing toward Moscow because he wanted the Soviets to
think he was impulsive and volatile? Or did he do it because he was
impulsive and volatile?
Thaler, Richard. “The Ultimatum Game.” Chap. 3 in The Winner’s
Curse: Paradoxes and Anomalies of Economic Life. Princeton: Princeton
University Press, 1994. Thaler’s wonderfully readable survey of
behavioral economics includes chapters on more than a dozen topics
at the intersection of economics and psychology. This chapter focuses
on the ultimatum game, including the classic version covered in lesson
6 and more complicated variations.
———. “The Winner’s Curse.” Chap. 5 in The Winner’s Curse: Paradoxes
and Anomalies of Economic Life. Princeton: Princeton University Press,
1994. In this easily accessible chapter rich with real-world examples,
Thaler surveys the empirical literatures on common-value auctions and
the winner’s curse.
Van Huyck, John, Raymond Battalio, and Richard Beil. “Tacit
Coordination Games, Strategic Uncertainty, and Coordination Failure.”
American Economic Review 80, no. 1 (1990): 234–248. In this paper from
the top economics journal, the authors describe a laboratory experiment
involving a seven-strategy version of the stag hunt game. Over time,
participants are more and more likely to end up at the risk-dominant
equilibrium where everyone makes the minimum possible contribution
to a group fund, not the payoff-dominant equilibrium where everyone
behaves generously. This leads to the average participant’s payoff being
less than half of what it could be.
Walker, Mark and John Wooders. “Minimax Play at Wimbledon.”
American Economic Review 91, no. 5 (2001): 1521–1538. Studying
a concept like the mixed-strategy Nash equilibrium is only worthwhile
if it actually describes the way people behave. In this article, the authors
find that professional tennis players in top tournaments serve to their
opponent’s forehand or backhand with the probabilities game theory
predicts they should.

ANSWERS
LESSON 1
1.
                       Colin
                  Left        Right
Rose   Up         0, 3        10, 10
       Down       2, 1        5, 0

2.
                       Colin
                  Left        Right
Rose   Up         2, 2        6, 1
       Down       1, 6        5, 5

The defining characteristic of a prisoner’s dilemma is that both players
earn a lower payoff when they both play their dominant strategy than
they would have if they’d both played their other strategy. In this
example, the Nash equilibrium is for Rose to play up and Colin to play
left. Both players earn a lower payoff at the associated outcome than
they would have if Rose had played down and Colin played right.
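
If you’d like to check this reasoning mechanically, here is a minimal
Python sketch (illustrative, not part of the course materials) that tests
for dominant strategies in the game from question 2, using the payoffs
shown above:

# payoffs[(row, col)] = (Rose's payoff, Colin's payoff), from question 2
payoffs = {("Up", "Left"): (2, 2), ("Up", "Right"): (6, 1),
           ("Down", "Left"): (1, 6), ("Down", "Right"): (5, 5)}
rows, cols = ("Up", "Down"), ("Left", "Right")

# A strategy is dominant if it earns strictly more than the alternative
# against every strategy the opponent might play.
def rose_dominates(r1, r2):
    return all(payoffs[(r1, c)][0] > payoffs[(r2, c)][0] for c in cols)

def colin_dominates(c1, c2):
    return all(payoffs[(r, c1)][1] > payoffs[(r, c2)][1] for r in rows)

print(rose_dominates("Up", "Down"))      # True: up is dominant for Rose
print(colin_dominates("Left", "Right"))  # True: left is dominant for Colin
# At (up, left) both earn 2, less than the 5 each at (down, right):
# the defining feature of a prisoner's dilemma.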

LESSON 2
1. If Rose is playing tit-for-tat, that means she cooperates in the first round,
and then in all subsequent rounds copies whatever Colin did in the
previous round. If Colin always defects, then in the first round, Rose
will begin by cooperating and Colin will defect. Rose earns a payoff of
0 in the first round, and Colin earns a payoff of 3. In all subsequent
rounds, Rose will copy Colin’s strategy from the previous round (defect)
and Colin will defect. Each will earn a payoff of 1.

2. If both Rose and Colin play tit-for-tat, both will cooperate in the first
round. In subsequent rounds, because each copies the other’s strategy
from the previous round, both players will continue to cooperate.
That means each player earns a payoff of 2 in the first round and in
subsequent rounds.
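
To experiment with these strategies yourself, here is a minimal Python
sketch of the repeated game. It uses the stage-game payoffs implied by the
answers above; the (3, 0) cell for a lone defector is an assumption, taken
by symmetry with the (0, 3) cell described in answer 1.

# PAYOFFS[(my move, their move)] = (my payoff, their payoff)
PAYOFFS = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
           ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy1, strategy2, rounds=6):
    h1, h2, totals = [], [], [0, 0]
    for _ in range(rounds):
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        totals[0] += p1
        totals[1] += p2
        h1.append(m1)
        h2.append(m2)
    return totals

print(play(tit_for_tat, always_defect))  # [5, 8]: 0 + 1x5 vs. 3 + 1x5
print(play(tit_for_tat, tit_for_tat))    # [12, 12]: cooperation throughout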

LESSON 3
1. False. Driving straight is your best response if you think your opponent
will swerve, but it’s not your best response if you think your opponent
will drive straight. In chicken, there is no one strategy that is always
your best response regardless of what you think your opponent will do.
That means you don’t have a dominant strategy.
2. There is more than one correct answer to this question. What’s important
is that Rose prefers swerving to driving straight when she thinks Colin
will drive straight, and she prefers driving straight to swerving when she
thinks Colin will swerve. So Rose’s payoff in the upper left cell should
be less than 0, and her payoff in the lower right cell should be less than
2. By similar reasoning, Colin’s payoff in the upper left cell should be
less than 0, and his payoff in the lower right cell should be less than 2.

                       Colin
                  Straight      Swerve
Rose   Straight    −1, −1        2, 0
       Swerve       0, 2        −1, 1
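
Here is a minimal Python sketch (again illustrative, not from the course)
of best-response analysis on the example matrix above. A cell is a pure-
strategy Nash equilibrium when each player’s payoff there is a best
response to the other’s strategy, that is, the cells where both payoffs
would be underlined:

# payoffs[(Rose's move, Colin's move)] = (Rose's payoff, Colin's payoff)
payoffs = {("Straight", "Straight"): (-1, -1), ("Straight", "Swerve"): (2, 0),
           ("Swerve", "Straight"): (0, 2), ("Swerve", "Swerve"): (-1, 1)}
strategies = ("Straight", "Swerve")

for r in strategies:
    for c in strategies:
        # Is each player's payoff the best available, holding the other fixed?
        rose_best = payoffs[(r, c)][0] == max(payoffs[(r2, c)][0] for r2 in strategies)
        colin_best = payoffs[(r, c)][1] == max(payoffs[(r, c2)][1] for c2 in strategies)
        if rose_best and colin_best:
            print(r, c)  # prints the two pure equilibria:
                         # Straight Swerve, then Swerve Straight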

LESSON 4
1. This is an assurance game. That’s a coordination game with two pure-
strategy Nash equilibria, where both players clearly prefer the payoff at
one of those equilibria to the payoff at the other. In this case, the Nash
equilibrium where both Rose and Colin play blue offers both players
a higher payoff than the Nash equilibrium where they both play red.

2. In the classic version of the stag hunt, you can catch the hare whether
or not the other player hunts the hare, but you can only catch the stag
if both of you hunt the stag. That means your payoff from hunting the
hare (or, in the current example, staying in and watching TV) is the
same regardless of what the other player chooses to do.
This game has two pure-strategy Nash equilibria. The Nash equilibrium
where both Rose and Colin watch TV risk dominates the Nash
equilibrium where both Rose and Colin go out for dinner and a movie,
because staying in is less risky for each player.

                                   Colin
                         Watch TV      Dinner and a movie
Rose  Watch TV             1, 1              1, 0
      Dinner and a movie   0, 1              3, 3

LESSON 5
1.

                              Colin
                      Straight (q)      Swerve (1 − q)
Rose  Straight (p)       −1, −1              2, 0
      Swerve (1 − p)      0, 2               1, 1

At the mixed-strategy Nash equilibrium, each player chooses the
probability with which they play each of their pure strategies such
that their opponent is indifferent between playing each of theirs. In
this example, that means Rose chooses p so that Colin’s expected
payoff from driving straight equals his expected payoff from swerving.
More formally,

E(π^Colin_Straight) = −1(p) + 2(1 − p)

E(π^Colin_Swerve) = 0(p) + 1(1 − p).

Setting these two expected payoffs equal to one another and solving for p,

−1(p) + 2(1 − p) = 0(p) + 1(1 − p)
2 − 3p = 1 − p
p = 1/2.

You could use similar analysis to show that Rose is indifferent between
her two pure strategies when q = 1/2. That means the mixed-strategy
Nash equilibrium is where Rose drives straight with probability 1/2
and swerves with probability 1/2, while Colin also drives straight with
probability 1/2 and swerves with probability 1/2.

2.
                                            Batter
                                  Ready for a        Ready for a
                                  fastball (q)       changeup (1 − q)
Pitcher  Throw a fastball (p)        −1, 1                1, −1
         Throw a changeup (1 − p)     1, −1              −2, 2

Using best-response analysis, you can see there’s not a cell where
both payoffs are underlined. Therefore, there’s no pure-strategy Nash
equilibrium. There is, however, a mixed-strategy Nash equilibrium. The
pitcher chooses p to leave the batter indifferent between being ready for
a fastball and being ready for a changeup:

E(π^Batter_Ready for FB) = 1(p) − 1(1 − p)

E(π^Batter_Ready for CU) = −1(p) + 2(1 − p).

Setting these two expected payoffs equal to one another and solving for p,

1(p) − 1(1 − p) = −1(p) + 2(1 − p)
−1 + 2p = 2 − 3p
p = 3/5.

The batter chooses q to leave the pitcher indifferent between throwing
a fastball and throwing a changeup:

E(π^Pitcher_Throw a FB) = −1(q) + 1(1 − q)

E(π^Pitcher_Throw a CU) = 1(q) − 2(1 − q).

Setting these two expected payoffs equal to one another and solving for q,

−1(q) + 1(1 − q) = 1(q) − 2(1 − q)
1 − 2q = −2 + 3q
q = 3/5.

So the mixed-strategy Nash equilibrium is where the pitcher throws a
fastball with probability 3/5 and a changeup with probability 2/5, while
the batter is ready for a fastball with probability 3/5 and a changeup
with probability 2/5.
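
You can check both calculations with a short Python sketch. The two
formulas below are simply the indifference conditions above, solved once
in general form for an arbitrary 2×2 game, so the same function handles
both the chicken game and the pitch-selection game:

from fractions import Fraction

def mixing_probs(A, B):
    # A[i][j] = row player's payoff; B[i][j] = column player's payoff.
    # p is the probability the row player puts on row 0, chosen so the
    # column player is indifferent between columns; q is symmetric.
    p = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    q = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Chicken (question 1): rows and columns ordered (straight, swerve).
print(mixing_probs(A=[[-1, 2], [0, 1]], B=[[-1, 0], [2, 1]]))
# -> (Fraction(1, 2), Fraction(1, 2)): each drives straight half the time

# Pitch selection (question 2): rows (fastball, changeup) for the pitcher,
# columns (ready for fastball, ready for changeup) for the batter.
print(mixing_probs(A=[[-1, 1], [1, -2]], B=[[1, -1], [-1, 2]]))
# -> (Fraction(3, 5), Fraction(3, 5)): p = q = 3/5, as derived above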

LESSON 6
1.

Game tree without line-item veto

Using backward induction, start by thinking about what the president
will do at each of his three decision nodes. Once you know that, take
a step backward to consider what Congress will do. The equilibrium,
which is stated in terms of strategies rather than payoffs, is for Congress
to either pass A or do nothing, since it’s indifferent between the outcome
associated with either strategy, and for the president to veto bill A, sign
bill B, and sign a bill that combines A and B. The two equilibrium
outcomes are circled. Both are the status quo.

Game tree with line-item veto

The equilibrium here is for Congress to either pass A or do nothing, since
it’s once again indifferent between the outcome associated with either
strategy, and for the president to veto bill A, sign bill B, and to sign into
law only bill B if sent a bill that combines A and B. The two equilibrium
outcomes are circled. Both are the status quo. The line-item veto doesn’t
change the outcome of the game in this example since Congress would
never voluntarily send the president a bill that combines A and B.

LESSON 7
1. Using backward induction, start by considering your opponent’s decision
in round 6. Here, he stops the game and collects $12.80 rather than let
the game continue. This is no different than what happens in round 6
of the classic version of the game. What is different is your choice in
round 5. Because you only care about your combined payoff, you choose
to let the game continue. Recognizing that, your opponent lets the game
continue in round 4. Using similar logic, you both choose to let the
game continue in all preceding rounds. At the equilibrium, you always
let the game continue, and your opponent lets the game continue in
rounds 2 and 4 but stops the game in round 6. The equilibrium outcome
is circled.

2. Find the equilibrium using backward induction. If the incumbent finds
that a competitor has entered the market, the incumbent responds
aggressively, since 3 is greater than 2. Recognizing this, the potential
competitor decides to stay out of the market, since 0 is greater than −1.
The equilibrium is where the potential competitor stays out and the
incumbent responds aggressively should he get the chance.
The equilibrium outcome is circled.
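
For completeness, here is a minimal Python sketch of the backward
induction in answer 2. Only four of the numbers are pinned down by the
answer (3 and 2 for the incumbent, 0 and −1 for the competitor); the other
two payoffs below are hypothetical placeholders that don’t affect the result.

# Terminal payoffs as (competitor, incumbent) pairs.
STAY_OUT = (0, 4)       # the incumbent's 4 is a hypothetical placeholder
AGGRESSIVE = (-1, 3)    # the competitor enters and the incumbent fights
ACCOMMODATE = (2, 2)    # the competitor's 2 is a hypothetical placeholder

# Step 1: the incumbent moves last, so pick its best response directly.
after_entry = max([AGGRESSIVE, ACCOMMODATE], key=lambda outcome: outcome[1])

# Step 2: the competitor anticipates that response when deciding to enter.
equilibrium = max([STAY_OUT, after_entry], key=lambda outcome: outcome[0])

print(after_entry)   # (-1, 3): the incumbent would respond aggressively
print(equilibrium)   # (0, 4): so the potential competitor stays out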

LESSON 8
1. False, for two reasons. First, you can’t determine what you should do
at each of your decision nodes because they’re both contained within
the same information set. In other words, if your opponent bets,
you don’t know whether nature dealt her a king or a queen. Second,
nature behaves randomly, without regard for players’ payoffs, so there’s
no way to use backward induction to determine which card nature will
deal her.

2. False. It wouldn’t be wise for the potential buyer to offer something in
the range of $650. If she were, for example, to make a $600 take-it-or-
leave-it offer, she could be sure the owner of a plum would turn it down,
and she could be just as sure the owner of a lemon would accept it. That
means there’s a 50% chance of buying nothing and a 50% chance of
dramatically overpaying for a lemon. A better option would be to offer
something between $100 and $200. She’d still have no chance of buying
a plum, but at least she wouldn’t have to worry about overpaying for
a lemon.

LESSON 9
1. This one is complicated. It’s true that a more dangerous duel is more
likely to motivate borrowers to repay their loans when their project is
successful. However, the duel can’t be too costly, or lenders won’t be
willing to make loans in the first place for fear they won’t be paid back
and will be forced to challenge the borrower to a duel to preserve their
honor. Dueling has to be just dangerous enough: not so dangerous that
it discourages loans (or, for that matter, challenges), but not so safe
that borrowers feel like they can get away with not repaying what they
owe. In sum, you can’t say that a more dangerous duel will always be
more effective.
2. In the airline example, you saw that airlines with coach and first-class
cabins have to be careful not to make coach too nice. If they do, business
travelers will start opting for coach. By the same logic, automakers may
want to save some sought-after features for their more expensive trim
packages, even though that makes the manufacturing process more
complicated and expensive. That’s because automakers, like airlines,
are trying to discriminate between buyers who are more price conscious
and those who can afford to spend extra. If Toyota makes the base-
model Camry too nice, more people will opt for that cheaper version.
By reserving some features for the more expensive trim packages, Toyota
may be able to boost its revenue by tempting more people to buy those
more expensive (and profitable) Camrys.

LESSON 10
1. This won’t always be the case. It’s true, of course, that the winner of
a second-price auction pays the second-highest bid, but that doesn’t
necessarily mean lower revenue for the seller. That’s because the second-
price auction is demand revealing, while the first-price auction isn’t.
In other words, bidders in a second-price auction have an incentive to
submit bids equal to what they’re truly willing to pay for a product.
Bidders in a first-price auction, on the other hand, have an incentive
to submit bids that are less than what they’re truly willing to pay. The
second-highest bid from an auction where everyone bids their full
willingness to pay won’t necessarily be less than the highest bid from
an auction where everyone bids less.
2.
                 English auction     Second-price sealed-bid auction
Winner           You                 You
Price paid       $3,000 + e          $3,000

You win in either case. In the English auction, the auctioneer keeps
naming higher prices until Cindy, Bob, and Ann drop out. Ann stays
in the auction until the price the auctioneer calls out exceeds what
she’s willing to pay. That means you end up paying $3,000 + e, where
e is the smallest increment the auctioneer uses between prices. In the
second-price auction, everyone writes down their true willingness to pay
(remember that it’s demand revealing). You win since you submit the
highest bid, and you pay a price equal to Ann’s bid. In this case, that’s
$3,000.
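
As a quick check, here is a minimal Python sketch comparing the two
formats. Your own willingness to pay isn’t given in the exercise, so the
$3,500 below (along with Bob’s and Cindy’s numbers) is hypothetical; all
that matters is that yours exceeds Ann’s $3,000, the second-highest value.

bids = {"You": 3500, "Ann": 3000, "Bob": 2500, "Cindy": 2000}
increment = 10  # "e," the smallest step the auctioneer uses between prices

ordered = sorted(bids.values(), reverse=True)

# English auction: the price rises until everyone but the highest-value
# bidder has dropped out, so you pay just over the second-highest value.
english_price = ordered[1] + increment

# Second-price sealed-bid auction: it's demand revealing, so everyone bids
# truthfully and the winner pays the second-highest bid exactly.
second_price = ordered[1]

winner = max(bids, key=bids.get)
print(winner, english_price, second_price)  # You 3010 3000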

LESSON 11
1. Assuming all the bidders’ thrift stores are similar, this really is a
common-value auction. Each store owner has a best guess as to the
value of a storage locker’s contents based on the quick peek. On average,
these guesses may be quite accurate, but the highest guess is likely to
be an overestimate of the contents’ value. If bidders don’t take that into
account, they’re likely to suffer from the winner’s curse.
2. If you bid in this all-pay auction by hiring a legal team for $2 million,
you have a one-third chance of coming away with a patent that entitles
you to $9 million in profits and a two-thirds chance of coming away
empty-handed. Win or lose, you have to pay $2 million in legal fees.
That means your expected payoff is

E(π) = (1/3)($9 million) + (2/3)($0) − $2 million = $1 million.
That’s greater than the $0 payoff you receive if you choose not to bid
at all.
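
The arithmetic is easy to confirm with a few lines of Python:

from fractions import Fraction

p_win = Fraction(1, 3)    # one-third chance of winning the patent race
prize = 9_000_000         # profits if your patent application wins
legal_fees = 2_000_000    # paid win or lose: it's an all-pay auction

expected_payoff = p_win * prize + (1 - p_win) * 0 - legal_fees
print(expected_payoff)    # 1000000, i.e., the $1 million found above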
LESSON 12
1. Start with the octopus. Because the grouper’s effort doesn’t enter into the
octopus’s payoff, the octopus’s optimization problem is straightforward.
First, take the derivative of its payoff function with respect to θ:

dπ_O/dθ = 4 − 2θ.

Next, set this equal to 0 and solve for θ:

4 − 2θ = 0
θ = 2.

So the octopus should spend two hours per day hunting.

Now, you can do something similar for the grouper by taking the
derivative of its payoff function with respect to g:

dπ_G/dg = 4 + 2θ − 2g.

Setting this equal to 0 and solving for g gives you the grouper’s
best-response rule:

4 + 2θ − 2g = 0
g = 2 + θ.

Because you already know θ = 2, that means that at the Nash
equilibrium, the octopus hunts for two hours per day and the grouper
hunts for four.

2. This is simply a matter of entering the Nash equilibrium levels of θ
and g from the previous problem into each creature’s payoff function:

π_O = (4θ) − θ² = (4 × 2) − 2² = 4

π_G = (4g + 2θ + 2gθ) − g² = (4 × 4 + 2 × 2 + 2 × 4 × 2) − 4² = 20.

At the Nash equilibrium, the octopus earns a payoff of 4 and the grouper
earns a payoff of 20.
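
If you’d rather let a computer do the calculus, here is a minimal sketch
using Python’s sympy library and the payoff functions from answer 2
(π_O = 4θ − θ² for the octopus and π_G = 4g + 2θ + 2gθ − g² for
the grouper):

import sympy as sp

theta, g = sp.symbols("theta g", nonnegative=True)
pi_octopus = 4 * theta - theta**2
pi_grouper = 4 * g + 2 * theta + 2 * g * theta - g**2

# Each creature's first-order condition gives its best response.
theta_star = sp.solve(sp.diff(pi_octopus, theta), theta)[0]  # 2
g_best_response = sp.solve(sp.diff(pi_grouper, g), g)[0]     # theta + 2
g_star = g_best_response.subs(theta, theta_star)             # 4

print(theta_star, g_star)                                    # 2 4
print(pi_octopus.subs(theta, theta_star),                    # 4
      pi_grouper.subs({theta: theta_star, g: g_star}))       # 20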

IMAGE CREDITS
Title graphics and backgrounds:
Lepusinensis/iStock/Getty Images.
Siminitzki/iStock/Getty Images.
