
Workbook for Political Economics

December 29, 2009


This set of sample exercises has been created for the undergraduate course
JEB064 Political Economics given by Martin Gregor at IES, Charles Univer-
sity, Prague. With the exception of currently assigned homeworks, each exercise
includes a full solution. The workbook is permanently under construction;
any comment is more than welcome.

1 Essentials in game theory


1.1 Centipede game (Rasmusen 2007)
Use backward induction to find the equilibrium of the following game (the hor-
izontal arrows mean ‘Pass’ and the vertical arrows are for ’Take’):

SOLUTION This game has the following interpretation: Think of two play-
ers who work on a joint project, with initial value 5. A player has an opportunity
to finish the project (’Take’) and divide the value 4:1. If not (’Pass’), there is
a next round, where the value doubles, but the opportunity to finish is now in
the hands of the other player. Maturity of the project is 7 rounds. Thus, if the
project gets to the very end, the final prize is 5 · 27 = 640.
Intuitively, if the current values is v > 0, and I expect the other player
to finish in the next round, I compare 4v/5 (4:1) with 2v/5 (1:4, but value is
double), hence I finish myself. Thus, we should expect ’Take’ to be played by
both players in all nodes, and the equilibrium payoff should be (4, 1). So, the
project should be immediately finished.
Formally, this is an extensive game with complete information. In these
games, we solve for a subgame-perfect Nash equilibrium, that is identified by
backward induction. Backward induction states that A plays ’Take’ in the last
node (256 > 128), B plays 'Take' in the node preceding the last node (128 > 64),
and so on and so forth.
We may try to look for non-subgame-perfect equilibria. (Recall bailouts,
where the politician commits not to subsidize a bad project, and this leads the
manager to finance a good project. The point was that the politician announced
a move that is not ex post credible, i.e. he announces not to bail out a bad
project.) Similarly, in our case, we would need that A announces a move that

is not ex post credible. That is, to play ’Pass’ in the final node. This could
change B’s previous move to ’Pass’ and vice versa.
Anyway, in this kind of Centipede game, this cannot be an equilibrium.
Why? An equilibrium is characterized such that there is no opportunity for
unilateral deviation. Think of a profile of strategies with ’Pass’ everywhere, and
the game ending in payoffs (128, 512). There is no opportunity for deviations
of player B (8 < 32 < 128 < 512). For player A, there is no opportunity
for deviation only in early rounds (4 < 16 < 64 < 128); however, there is an
opportunity in the final node, where 128 < 256. Hence, A must play ’Take’ in
the last node. By analogy, we can easily derive that also strategy profiles that
would end up sooner cannot be equilibrium.
Can you see why the search for non-subgame perfect equilibrium is here
futile, unlike in bailout game? The difference to bailout is that there, the
not-ex-post-credible move was not played in equilibrium. In other words, if
the politician announced not to bailout, and the manager believed that, the
politician would not have to prove that he or she really does not bailout, because
there would not be any bad project. In contrast to that, if A announces to ’Pass’
in the last node, he or she must prove this by play. So, in the previous case, the
not ex-post credible move served only as a threat that is not carried out. Here,
the not ex-post credible move serves as a promise that must be carried out.
To sum up, Centipede game features a single, subgame-perfect equilibrium,
which is inefficient (because 4 < 128 and 1 < 512). Efficiency can be improved
only if the players (i) either play this game repeatedly, or (ii) have access to some
cooperative device (e.g., the possibility to write a contract that will be enforced
by a third party). This is also the problem with experimental tests of the
closely related Ultimatum game (I recommend Chapter 3 in Ken Binmore's
lovely book Does Game Theory Work? The Bargaining Challenge, MIT Press, 2007).
Participants in experiments do not play the equilibrium as we describe it, but
we don’t know if they are really maximizing payoff in this game, or if they
are misled by thinking that they should play a repeated game with threats
and retaliations, where ’Pass’ can be an equilibrium move. And why are they
misled? Because in real life, we typically play repeated games, not one-shot
games where we meet a partner and, when the game ends, he or she forever
disappears into the void.
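The backward-induction argument can be verified mechanically. Below is a minimal sketch in Python; the payoff vectors at the seven 'Take' nodes and at the final 'Pass' outcome are reconstructed from the 4:1 split described above, so treat the exact numbers as an assumption about the missing figure.

# Backward induction in the 7-node Centipede game described above.
# take_payoffs[i] = (payoff of A, payoff of B) if 'Take' is played at node i
# (A moves at even indices 0, 2, 4, 6; B at odd indices 1, 3, 5).
take_payoffs = [(4, 1), (2, 8), (16, 4), (8, 32), (64, 16), (32, 128), (256, 64)]
pass_payoff = (128, 512)   # if everybody passes to the very end

def solve(node):
    """Return the equilibrium continuation payoff from `node` onward."""
    if node == len(take_payoffs):          # no nodes left: project matures
        return pass_payoff
    mover = node % 2                       # 0 = player A, 1 = player B
    take = take_payoffs[node]
    keep_passing = solve(node + 1)         # payoff if the mover passes
    # the mover compares his/her own component of the two payoff vectors
    return take if take[mover] >= keep_passing[mover] else keep_passing

print(solve(0))   # -> (4, 1): 'Take' is played immediately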

1.2 Dirty campaign (lecture notes by Boehmke)


Consider a situation facing two competing candidates for a seat in the Senate
in Round 2 of run-off elections (in run-off elections, two best candidates pass
from Round 1 to Round 2). They can either spend money on negative campaign
advertising, or they can hold to their pledge not to engage in “mud-slinging”.
If one candidate breaks their pledge, independent voters may hold it against
him, but if both break their pledges, most independent voters will surely be
disgusted and stay home. However, these candidates only really care about
getting elected, not about hurting voters' belief in democracy.
Since the Round 2 follows Round 1 by only one week, we can treat their

decisions as simultaneous: neither candidate would have time to prepare dirty
ads if they have not done so already. If both candidates keep it clean, the
incumbent will win 60% of the time. If the challenger is the only one to get
dirty, the incumbent gets embarrassed and only has a 45% chance of winning.
If only the incumbent goes dirty, the challenger looks sort of like an okay guy
and the incumbent will have only a 55% chance of winning. Lastly, if they both
start slinging mud, independent voters are so disgusted they all flip coins and
each candidate has 50% chance of winning.
Draw the normal form representation (game matrix) of this simultaneous
move game and find the equilibrium.

SOLUTION It is only important to carefully reflect assumptions and rewrite


them into the payoffs.

Table 1: Chances of winning (game or payoff matrix)

Incumbent/Challenger dirty clean


dirty     50, 50    55, 45
clean     45, 55    60, 40

The best responses are as follows: for the challenger, dirty is a dominant
strategy (50 > 45 against dirty, 55 > 40 against clean); for the incumbent, the
best response to dirty is dirty (50 > 45). An equilibrium in pure strategies is such
that no player has a profitable unilateral deviation, hence an equilibrium
involves strategies that are best responses to the best responses of the other players.
In our case, this is only the (dirty, dirty) strategy profile, with an equal chance
of winning for both candidates. Independent voters stay home, watching TV,
desperate and disgusted with democratic politics.
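For completeness, the equilibrium can be verified by checking best responses cell by cell; a minimal sketch in Python, with the payoff numbers taken from Table 1:

# Pure-strategy Nash equilibria of the dirty-campaign game (Table 1).
# payoffs[(i, c)] = (incumbent's win chance, challenger's win chance)
strategies = ["dirty", "clean"]
payoffs = {("dirty", "dirty"): (50, 50), ("dirty", "clean"): (55, 45),
           ("clean", "dirty"): (45, 55), ("clean", "clean"): (60, 40)}

for i in strategies:
    for c in strategies:
        # no profitable unilateral deviation for either candidate?
        inc_ok = all(payoffs[(i, c)][0] >= payoffs[(d, c)][0] for d in strategies)
        cha_ok = all(payoffs[(i, c)][1] >= payoffs[(i, d)][1] for d in strategies)
        if inc_ok and cha_ok:
            print("Nash equilibrium:", i, c)   # -> dirty dirty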

1.3 George W. Bush in 2004 (lecture notes by Boehmke)


Use a time machine to get back to year 2004. The current US President, George
W. Bush, is trying to achieve legislation that will enact (i) increased defense to
fight terror, (ii) economic stimulus to help the economy, and (iii) money to fight
AIDS. Unfortunately, with a slight majority in the Senate, the Republicans will
only be able to pass one of these issues in order to keep the budget deficit down.
Knowing that once he says that he will compromise on the policies, there is
no going back, GWB must choose wisely. If he offers to compromise on AIDS and
the Democrats agree, he has a 52% chance of getting re-elected in 2004, which
means that the Democrats have a 48% chance of winning. If the Democrats do
not agree, Bush's hopes jump to 57%, leaving the Democrats' chances at 43%. If
GWB offers to compromise on the defense budget for fighting terrorism he has
a 55% chance of winning re-election if the Democrats cooperate, but if they fail
to do so almost everyone will vote Republican, giving Bush a 68% chance of a
second term.
If, however, the President chooses economic stimulus, the Democrats can
expect a 55% chance of their candidate winning if they refuse to be cooperative,

since the economy will stay down the tubes and everyone will blame GWB's
elimination of dividend taxation, but if the Democrats do the responsible thing
and cooperate, they will only have a 45% chance of winning.
Draw the extensive form of the game just described, with the President
moving first. Assume that Bush only cares about the chance of avenging his
daddy's loss in 1992 and the Democrats only care about their own chances. Using
these payoffs, solve by backward induction for the equilibrium of the game and
write out the equilibrium strategies for both players.

SOLUTION GWB plays one of three policies in the initial node. The Democrats
(D) move subsequently. The Democrats' best responses are emphasized in the figure.
Following these best responses, GWB's best response is also emphasized.
This gives a subgame-perfect Nash equilibrium, with solid lines denoting the strat-
egy profile. It involves compromise on defense spending, yielding payoffs (55, 45).

Figure 1: George W. Bush vs Democrats

Can we think of a non-subgame perfect equilibrium? To check for exis-


tence, we may transform the extensive game into a simultaneous game. How to
do that? Well, simply construct all strategies of the players. For GWB, the
set of strategies is compromise on {AIDS, defense, economy}. For D, a strat-
egy, by definition of strategy, provides a full guide (manual) for behavior in
all nodes. Since there are three nodes, each with actions (yes, no), a strat-
egy describes (dis)agreement for each of the three nodes. The set therefore has
2^3 = 8 elements, {yyy, yyn, yny, ynn, nyy, nyn, nny, nnn}, where the three letters
refer to the AIDS, defense and economy nodes, respectively. The final step is to
construct a payoff matrix for any strategy profile, i.e. for any combination of strategies.

Table 2: Payoff matrix in an equivalent simultaneous game

GWB/D     yyy     yyn     yny     ynn     nyy     nyn     nny     nnn

AIDS      52, 48  52, 48  52, 48  52, 48  57, 43  57, 43  57, 43  57, 43
defense   55, 45  55, 45  68, 32  68, 32  55, 45  55, 45  68, 32  68, 32
economy   55, 45  45, 55  55, 45  45, 55  55, 45  45, 55  55, 45  45, 55

Checking each cell for profitable unilateral deviations, the pure-strategy Nash
equilibria are (defense, yyy) and (defense, yyn). In all equilibria, GWB chooses
defense spending and the Democrats agree at the defense node, so all equilibria yield
the same outcome as the subgame-perfect equilibrium, with payoffs (55, 45).
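The equilibria of Table 2 can be enumerated mechanically as well; a short sketch, writing each strategy of the Democrats as a triple of answers at the (AIDS, defense, economy) nodes:

from itertools import product

# Stage payoffs (GWB, Democrats) if GWB compromises on a given issue and
# the Democrats agree ('y') or refuse ('n') at that node.
node_payoffs = {"AIDS":    {"y": (52, 48), "n": (57, 43)},
                "defense": {"y": (55, 45), "n": (68, 32)},
                "economy": {"y": (55, 45), "n": (45, 55)}}
issues = ["AIDS", "defense", "economy"]
dem_strategies = ["".join(s) for s in product("yn", repeat=3)]  # e.g. 'yyn'

def u(issue, dem):
    # payoff pair at the node actually reached, given the Democrats' answer there
    return node_payoffs[issue][dem[issues.index(issue)]]

for g in issues:
    for d in dem_strategies:
        gwb_ok = all(u(g, d)[0] >= u(g2, d)[0] for g2 in issues)
        dem_ok = all(u(g, d)[1] >= u(g, d2)[1] for d2 in dem_strategies)
        if gwb_ok and dem_ok:
            print("Nash equilibrium:", g, d)   # -> defense yyy, defense yyn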

1.4 The Monty Hall problem (Rasmusen 2007)


You are a contestant on the TV show, “Let’s Make a Deal.” You face three
curtains, labelled A, B and C. Behind two of them are toasters, and behind
the third is a Mazda Miata car. You choose A, and the TV showmaster says,
pulling curtain B aside to reveal a toaster, “You’re lucky you didn’t choose B,
but before I show you what is behind the other two curtains, would you like to
change from curtain A to curtain C?” Should you switch? What is the exact
probability that curtain C hides the car?

SOLUTION A critical assumption is that the only goal of the showmaster is


to make a show for the spectators, not to save the Mazda for the next round. Once
you recognize that, the rest is not so difficult.
For the showmaster, to make a show means to give you a chance to reconsider
your choice. Notice that a show is always possible, and we can even say exactly
what will be revealed. How come? Suppose the car is in A. If your first
choice is A (so you hit the car), then the showmaster reveals randomly B or
C, each with probability 1/2. If your first choice is B, he must reveal C with
probability 1 (he can't reveal B, that you selected, or A, where the car actually
is). If your first choice is C, he must reveal B with probability 1. The following
table illustrates what will be revealed under any combination of your choice and
the true location of the Mazda. In the table, we use that a priori, you are com-
pletely uncertain about where the car could be, so you must treat each curtain
symmetrically, with the same a priori probability 1/3:

Table 3: Which curtain is revealed?

You/Mazda    A (1/3)            B (1/3)            C (1/3)
A            B (1/2), C (1/2)   C                  B
B            C                  A (1/2), C (1/2)   A
C            B                  A                  A (1/2), B (1/2)

Coming back to our example: You chose A, and B was revealed. You want
to find out the posterior probability that Mazda is in A, when you selected A

and B was revealed by the showmaster.[1]
You have to apply Bayes' rule. It states (recall Rasmusen, p. 57 in the 4th ed.)
that observing actions (here, the showmaster's revelation) helps you in updating
beliefs.

Posterior for Nature's Move = (Likelihood of Player's Move)(Prior for Nature's Move) / (Marginal Likelihood of Player's Move)

In our example, the updated (posterior) belief is as follows:

Pr(Mazda in A|B revealed) = Pr(B revealed|Mazda in A) Pr(Mazda in A) / (Marginal Likelihood of Player's Move)

The marginal likelihood gives you a "total" probability that B is revealed,
which rewrites into Pr(B revealed|Mazda in A) · Pr(Mazda in A) + Pr(B revealed|Mazda in B) ·
Pr(Mazda in B) + Pr(B revealed|Mazda in C) · Pr(Mazda in C) = 1/2 · 1/3 + 0 · 1/3 +
1 · 1/3 = 1/2. Hence:

Pr(Mazda in A|B revealed) = (1/6)/(1/2) = 1/3
Pr(Mazda in B|B revealed) = 0/(1/2) = 0
Pr(Mazda in C|B revealed) = (1/3)/(1/2) = 2/3
This clearly shows that if B is revealed, the posterior belief on A is less than the
prior belief, whereas the posterior belief on C is greater than the prior belief. So, you
should revise your choice towards C once B is revealed and your initial choice is A.
A webpage http://www.stat.sc.edu/∼west/javahtml/LetsMakeaDeal.html offers a
(hopefully functioning) Java applet to enjoy this (a little bit) silly game.
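For readers who prefer brute force to Bayes' rule, here is a small Monte Carlo sketch of the situation (you always pick A first; the host then opens a non-chosen curtain hiding a toaster):

import random

def trial(switch):
    car = random.choice("ABC")
    pick = "A"
    # the host opens a curtain that is neither your pick nor the car
    opened = random.choice([d for d in "ABC" if d != pick and d != car])
    if switch:
        pick = next(d for d in "ABC" if d != pick and d != opened)
    return pick == car

n = 100_000
print("win prob. if you stay:  ", sum(trial(False) for _ in range(n)) / n)  # ~ 1/3
print("win prob. if you switch:", sum(trial(True)  for _ in range(n)) / n)  # ~ 2/3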

1.5 Elmer’s apple pie (Rasmusen 2007)


Mrs Jones has made an apple pie for her son, Elmer, and she is trying to figure
out whether the pie tasted divine, or merely good. Her pies turn out divinely a
third of the time. Elmer might be ravenous, or merely hungry, and he will eat
either 2, 3, or 4 pieces of pie. Mrs Jones knows he is ravenous half the time (but
not which half). If the pie is divine, then, if Elmer is hungry, the probabilities
of the three consumptions are (0, 0.6, 0.4), but if he is ravenous the probabilities
are (0, 0, 1). If the pie is just good, then the probabilities are (0.2, 0.4, 0.4) if he
is hungry and (0.1, 0.3, 0.6) if he is ravenous.
Elmer is a sensitive, but useless, boy. He will always say that the pie is
divine and his appetite weak, regardless of his true inner feelings.
[1] For simplification, I will omit in all expressions the condition that "A is chosen"; this is
possible because in the derivation of probabilities, we don't have to care at all about what
happens in the case of your other choices.

a) What is the probability that he will eat four pieces of pie?

b) If Mrs Jones sees Elmer eat four pieces of pie, what is the probability that
he is ravenous and the pie is merely good?

c) If Mrs Jones sees Elmer eat four pieces of pie, what is the probability that
the pie is divine?

SOLUTION It is useful to write the probabilities of Elmer's consumption (how much
he eats, {2, 3, 4}) in a table, for each combination of pie quality (divine/good) and
appetite (ravenous/hungry).

Table 4: Elmer's choice over 2, 3, or 4 pieces

                  divine (1/3)    good (2/3)
ravenous (1/2)    0, 0, 1         0.1, 0.3, 0.6
hungry (1/2)      0, 0.6, 0.4     0.2, 0.4, 0.4

Answer a): The probability that he will eat four pieces of pie is the marginal
likelihood of eating 4 pieces, 1/3 · 1/2 · 1 + 2/3 · 1/2 · 6/10 + 1/3 · 1/2 · 4/10 + 2/3 · 1/2 · 4/10 = 17/30.

Answer b): In Bayes' rule, the numerator for this case is 2/3 · 1/2 · 6/10 = 1/5. The
posterior probability of having a ravenous Elmer and a good pie, when Elmer eats
4 pieces, is therefore

(1/5)/(17/30) = 6/17.

Answer c): In Bayes' rule, the numerator for this case is 1/3 · 1/2 · 1 + 1/3 · 1/2 · 4/10 = 7/30.
The posterior probability of having a divine pie, when Elmer eats 4 pieces, is

(7/30)/(17/30) = 7/17.
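The three answers can be reproduced in a few lines; a sketch using exact fractions and the probabilities of Table 4:

from fractions import Fraction as F

# Pr(quality), Pr(appetite) and Pr(pieces | quality, appetite) from Table 4
p_quality  = {"divine": F(1, 3), "good": F(2, 3)}
p_appetite = {"ravenous": F(1, 2), "hungry": F(1, 2)}
p_pieces = {("divine", "ravenous"): {2: F(0), 3: F(0), 4: F(1)},
            ("divine", "hungry"):   {2: F(0), 3: F(6, 10), 4: F(4, 10)},
            ("good", "ravenous"):   {2: F(1, 10), 3: F(3, 10), 4: F(6, 10)},
            ("good", "hungry"):     {2: F(2, 10), 3: F(4, 10), 4: F(4, 10)}}

def joint(q, a, pieces):
    return p_quality[q] * p_appetite[a] * p_pieces[(q, a)][pieces]

p4 = sum(joint(q, a, 4) for q in p_quality for a in p_appetite)
print("a) Pr(4 pieces) =", p4)                                             # 17/30
print("b) Pr(ravenous & good | 4) =", joint("good", "ravenous", 4) / p4)   # 6/17
print("c) Pr(divine | 4) =", sum(joint("divine", a, 4) for a in p_appetite) / p4)  # 7/17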

1.6 Cancer tests (McMillan 1992)


Imagine that you are being tested for cancer, using a test that is 98 percent
accurate. If you indeed have cancer, the test shows positive (indicating cancer)
98 percent of the time. If you do not have cancer, it shows negative 98 percent
of the time. You have heard that 1 in 20 people in the population actually have
cancer. Now your doctor tells you that you tested positive, but you shouldn’t
worry because his last 19 patients all died. How worried should you be? What
is the probability you have cancer?

SOLUTION Again, a table is useful.

Table 5: Indication of cancer tests

Reality/Test       Positive    Negative
Cancer (1/20)      49/50       1/50
No cancer (19/20)  1/50        49/50

First, derive the marginal likelihood of getting a positive test: Pr(Positive) =
Pr(Positive|Cancer) · Pr(Cancer) + Pr(Positive|No cancer) · Pr(No cancer) =
49/50 · 1/20 + 1/50 · 19/20 = 17/250. By Bayes' rule,

Pr(Cancer|Positive) = Pr(Positive|Cancer) · Pr(Cancer) / Pr(Positive) = 49/68 ≈ 0.72.
Contrary to the doctor’s claim, there is a high probability of having cancer.
Notice one interesting point: If Type I and Type II errors (false positivity,
false negativity) are close to each other (in our case, they are identical, 2%),
then it is not very useful to test populations where a priori distribution is very
asymmetric. Rasmusen mentions, for instance, HIV testing of an entire popu-
lation. The problem is that in a large subpopulation of non-infected, a small
error will bring a large amount of false positives. This large amount will make
it difficult to distinguish between true and false positives.
In our case, the share of false positives among the positives is already high, at
28%. But it may increase even further if the probability of cancer in the population
drops (hence, if the population is very asymmetric). For instance, if the probability
of cancer is 1/100 instead of 1/20, the marginal likelihood is 49/50 · 1/100 +
1/50 · 99/100 = 37/1250. The probability of having cancer when observing a positive
test will then be only 49/148 ≈ 0.33, hence the share of false positives is extremely
high, at 67%. The message is that testing is more precise in (high-risk)
subpopulations where the probability of infection is larger, since the share of
false positives is much lower there.
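How strongly the posterior depends on the prior is easy to trace numerically; a minimal sketch:

def posterior(prior, sensitivity=0.98, specificity=0.98):
    """Pr(cancer | positive test) by Bayes' rule."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

for prior in (1/20, 1/100, 1/1000):
    print(f"prior {prior:.3f} -> posterior {posterior(prior):.2f}")
# prior 0.050 -> posterior 0.72
# prior 0.010 -> posterior 0.33
# prior 0.001 -> posterior 0.05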

1.7 The Battleship Problem (Nalebuff 1988)


The Pentagon has the choice of building one battleship or two cruisers. One
battleship costs the same as two cruisers, but a cruiser is sufficient to carry out
the navy’s mission—if the cruiser survives to get close enough to the target.
The battleship has a probability of p of carrying out its mission, whereas a
cruiser only has probability p/2. Whatever the outcome, the war ends and any
surviving ships are scrapped. Which option is superior?

SOLUTION Either the mission is successful or not. For the battleship,


Pr(success) = p > 0. For the cruisers, we have that the first cruiser tries to
complete the mission, and if it fails, the second cruiser attempts to complete it. For
the cruisers:

Pr(success) = Pr(C1 success) + Pr(C1 failure) · Pr(C2 success)
            = p/2 + (1 − p/2) · p/2 = p − p²/4 < p

It is better to build a single battleship. Notice that this was under the
assumption that we don't care about saving the cost of the second cruiser once
the first cruiser is successful.
If we care, we have to introduce a cost c per cruiser, and also a value of the
mission v. A battleship brings expected payoff pv − 2c. With cruisers, the expected
payoff consists of three states of the world: (i) success of cruiser 1 (no need for
further investment), (ii) success of cruiser 2, and (iii) failure of cruiser 2:

(p/2)(v − c) + (1 − p/2)(p/2)(v − 2c) + (1 − p/2)²(−2c) = (pv − 2c)(1 − p/4) < pv − 2c

We use that the expected payoff of the battleship must be positive, pv − 2c > 0.
To sum up, the battleship is better even if we account for the expected saving
of the cost of cruiser 2, which occurs with probability p/2.
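A quick numerical cross-check of the two formulas, for illustrative (assumed) values of p, v and c:

def battleship(p, v, c):
    return p * v - 2 * c                     # mission value v, cost of one battleship = 2c

def cruisers(p, v, c):
    q = p / 2                                # a single cruiser's success probability
    return (q * (v - c)                      # first cruiser succeeds, second never built
            + (1 - q) * q * (v - 2 * c)      # first fails, second succeeds
            + (1 - q) ** 2 * (-2 * c))       # both fail

v, c = 10.0, 1.0                             # illustrative values
for p in (0.2, 0.5, 0.8):
    b, cr = battleship(p, v, c), cruisers(p, v, c)
    print(f"p={p}: battleship {b:.2f} vs cruisers {cr:.2f}  "
          f"(check: (pv-2c)(1-p/4) = {(p*v - 2*c)*(1 - p/4):.2f})")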

1.8 Joint ventures (Rasmusen 2007)


Software Inc. and Hardware Inc. have formed a joint venture. Each can exert
either high or low effort, which is equivalent to costs of 20 and 0. Hardware
moves first, but Software cannot observe his effort. Revenues are split equally at
the end, and the two firms are risk neutral. If both firms exert low effort, total
revenues are 100. If the parts are defective, the total revenue is 100; otherwise,
if both exert high effort, revenue is 200, but if only one player does, revenue is
100 with probability 0.9 and 200 with probability 0.1. Before they start, both
players believe that the probability of defective parts is 0.7. Hardware discovers
the truth about the parts by observation before he chooses effort, but Software
does not.

a) Draw the extensive form and put lines around the information sets of
Software at any nodes at which he moves.

b) What is the Nash equilibrium?

c) What is Software’s belief, in equilibrium, as to the probability that Hard-


ware chooses low effort?

d) If Software sees that revenue is 100, what probability does he assign to


defective parts if he himself exerted high effort and he believes that Hard-
ware chose low effort?

SOLUTION To start with, consider the payoffs for all combinations of effort
and defective/non-defective components. We use that the expected total revenue for
the case with non-defective parts and a single high effort is 0.9 · 100 + 0.1 · 200 = 110.
See the tables, where the row is for Hardware, and the column for Software.

Table 6: Expected total revenues

non-defective (3/10)   low                        high
low                    100                        0.9 · 100 + 0.1 · 200
high                   0.9 · 100 + 0.1 · 200      200

defective (7/10)       low     high
low                    100     100
high                   100     100

Dividing revenues equally, and inserting the cost for effort, we get the payoffs.
These are also provided in the game tree, where uncertainty of Software (they
observe neither Nature’s move, nor Hardware’s move) is depicted by having a
single information set.

Table 7: Expected payoffs

non-defective (3/10)   low       high
low                    50, 50    55, 35
high                   35, 55    80, 80

defective (7/10)       low       high
low                    50, 50    50, 30
high                   30, 50    30, 30

To solve these games, we proceed by backward induction. Denote p ∈ [0, 1]


the probability that Hardware plays high effort if parts (components) are non-
defective and q ∈ [0, 1] if defective. Software calculates that with probability (i)
0.3(1 − p) he faces the first node (non-defective parts, low Hardware’s payoff),
(ii) 0.3p the second node, (iii) 0.7(1 − q) the third node, and (iv) 0.7q the fourth
node. Thus, Software prefers high effort to low effort if expected payoff from
high effort exceeds expected payoff from low effort,

0.3(1 − p)35 + 0.3p · 80 + 0.7 · 30 ≥ 0.3(1 − p)50 + 0.3p · 55 + 0.7 · 50.

This rewrites to p ≥ 185/120 > 1, which is impossible. Hence, we know


that Software plays low effort irrespective of the play of Hardware. Hardware
anticipates low effort of Software. For non-defective parts, this means that low
effort is better, p = 0 (50 > 35). For defective parts, this means that low effort
is again better, q = 0 (50 > 30). Thus, Nash equilibrium is characterized such
that Hardware and Software always exert low effort. For answer c), the equi-
librium Software’s belief on the Hardware’s low effort is one.

Figure 2: Joint venture: a game tree

Answer d): One has to be careful. The combination of low effort of Hardware
and high effort of Software gives 100 in two cases: (i) non-defective parts,
but only with probability 0.9, and (ii) defective parts, with probability 1. Thus,
the marginal likelihood of observing revenue 100 is 0.3 · 0.9 + 0.7 · 1 = 0.97. The
posterior probability of having defective parts is thus 0.7/0.97 ≈ 0.72. (Notice
that this is not an equilibrium probability, because in equilibrium both players
always play low effort. Then total revenues are always 100, so Software keeps his
initial beliefs and assigns probability 7/10 to defective parts.)

1.9 Political consulting


You are a foreign investor and you need to bribe a decisive senior bureaucrat.
There are 2 bureaucrats (A, B), both looking identically important. Bribing a
single bureaucrat costs you b. You can hire a political consultant who charges
c < b/2 for a recommendation. He certainly knows who is the decisive bureau-
crat but you cannot be sure about his advice because he charges his money
before you bribe. But, if he has to select from a set of alternatives, all with
exactly identical monetary payoffs, he recommends the truthful one. If you
happen to bribe a non-decisive bureaucrat, you have to proceed in looking for
the decisive bureaucrat (again, you decide first whether to hire a consultant,
and then whether to follow his advice or not).
1. What are your prior (initial) beliefs that bureaucrat A is decisive?
2. Draw a game tree of this extensive game.
3. By backward induction, identify and fully describe an equilibrium.

4. In the equilibrium, are you purchasing the advice of the consultant? If so,
once or even twice? Are you following the consultant's advice?

5. When consultant recommends A, what is your posterior belief that bu-


reaucrat A is decisive?

SOLUTION Since both bureaucrats look identical, you must treat them
symmetrically, so your priors are 1/2 for each. The game tree in its entirety
is complex, so we help ourselves by solving all subgames that follow when (i)
you select bureaucrat X ∈ {A, B}, and he or she is not decisive. Denote your
expected equilibrium value of any of such a subgame as E. In such a subgame,
posterior belief that X is decisive is zero, and posterior belief that Y is decisive
is one. Thus, playing X leads to repetition of the subgame, only the payoff
declines by b to E − b (or E − b − c, if the consultant is paid). Playing Y
terminates the game with payoff −b (or −b − c, if the consultant is paid). It is
clear that E < 0, so you select Y in all nodes. By backward induction, you don’t
invest, then play Y , and the equilibrium payoff in this subgame is E = −b < 0
for you and zero for the consultant.

Figure 3: Subgame with indecisive X in Round 1

We enter this equilibrium payoff into the entire game. The full game starts
by a play of Nature, that appoints bureaucrat A to be decisive with probability
1/2 and bureaucrat B with probability 1/2. Then you decide on investing. In the
case of no investment, your prior beliefs remain identical (nothing is observed, so
you couldn’t update them). In such a case, you play a lottery with probabilities
1/2 over A or B being decisive. A correct pick gives you −b, and a wrong pick
gives you −b + E = −2b. Your expected payoff of playing A is −b · 1/2 − 2b · 1/2 = −(3/2)b,
and your expected payoff of playing B is also −b · 1/2 − 2b · 1/2 = −(3/2)b. Thus, you
always get −(3/2)b, so any mixing can be played in equilibrium.
To solve the game, you help yourself by considering that a consultant, facing
identical payoffs, reveals the truth with probability one. Since the consultant
indeed always faces identical payoffs (in subgame of Round 2, you never pay the
consultant), the consultant’s action is a perfectly revealing signal of the state
of Nature, and your posterior belief on X ∈ {A, B} being decisive, when ob-
taining recommendation on X, must be one. The same could be obtained more
formally, using Bayes' rule and the facts that Pr(X recommended | X decisive) = 1
and Pr(X recommended | Y decisive) = 0.

Figure 4: The game tree with equilibrium values in the subgames

To summarize the equilibrium path (depicted by solid lines): You invest, the
consultant recommends a decisive bureaucrat, and you follow the advice. The
payoffs (in bold) are (−b − c, c), irrespective of the state of Nature.

2 Collective choice: preferences
2.1 Asymmetric utilities
We have a policy t ≥ 0, and two individuals, A and B, with the following indirect
utility functions over the policy:

uA(t) = 2√t − t,
uB(t) = 1 − (b − t)², b > 1.

1. Derive bliss points of the individuals, (t∗A , t∗B ) and prove that t∗B > t∗A .
Prove that the preferences are quasiconcave.
2. Characterize all pairs of proposals, 0 ≤ x < t∗A < t∗B < y, that simultane-
ously satisfy uB (x) > uB (y) and uA (y) > uA (x).
3. Characterize a necessary condition for the existence of a pair of proposals
defined above.

SOLUTION
1. By the FOCs, (t∗A, t∗B) = (1, b), where by assumption b > 1. Concavity of both
utility functions, u′′A(t) = −(1/2)t^(−3/2) < 0 and u′′B(t) = −2 < 0, implies
quasiconcavity.
2. We look for all pairs (x, y) that satisfy all the conditions, so one of the
ways to identify the pairs is to fix x and look for those y that satisfy the
conditions (and then repeat this for any possible x). So, we fix x and use
uA(y) > uA(x). This implies that 2√y − y > 2√x − x = k, where k is the
utility of individual A associated with policy x. Notice that by t ≥ 0, we
have k ∈ [0, 1]. By solving the inequality, we obtain

2 − k − 2√(1 − k) < y < 2 − k + 2√(1 − k).

It is easy to verify that for k ∈ [0, 1],

2 − k − 2√(1 − k) ≤ 1 ≤ 2 − k + 2√(1 − k).

We combine this restriction with y > b > 1 (by assumption) to get

b < y < 2 − k + 2√(1 − k).     (1)

The second condition that has to hold is uB(x) > uB(y), equivalent to
b − x < y − b, or

y > 2b − x.     (2)

In total, we may characterize the pairs as correspondences of k ∈ [0, 1],
X(k), Y(k). This means 2√(X(k)) − X(k) = k, or X(k) := 2 − k − 2√(1 − k).
With this, we re-write the condition in (2) as

y > 2b − X(k) = 2b − 2 + k + 2√(1 − k).     (3)

Therefore, all y-proposals that satisfy our conditions are Y(k) := {y ≥ 0 :
y > b, y > 2b − 2 + k + 2√(1 − k), y < 2 − k + 2√(1 − k)}. To sum up, the
pairs of proposals are defined as x = X(k), y ∈ Y(k), where k ∈ [0, 1].

3. Since X(k) is defined over the entire interval k ∈ [0, 1], a necessary condi-
tion for existence is just the existence of some k such that Y(k) ≠ ∅. It
amounts to ensuring:

∃k ∈ [0, 1] : 2b − 2 + k + 2√(1 − k) < 2 − k + 2√(1 − k)
∃k ∈ [0, 1] : b < 2 − k + 2√(1 − k)

The former reduces to b < 2 − k and thus holds for some k when b < 2, and the
latter holds for some k whenever b ∈ (0, 4), hence the condition writes b < 2.
In such a case, we can always find a sufficiently small k ∈ (0, 1) giving us a
non-empty Y(k).

2.2 Condorcet winner


Consider individuals A, B, C and eight alternatives a, b, . . . , h. The preference
orderings are as follows:

1st 2nd 3rd 4th 5th 6th 7th 8th


A a d b f c e g h
B c e a h b g d f
C b g c h a d f e

1. Is there a Condorcet winner? Explain.

2. If we eliminate two proposals, can we get a Condorcet winner? If so, which


two proposals?

SOLUTION

1. We begin by constructing all pairwise voting outcomes. For n proposals,


there are n(n − 1)/2 pairwise votes, with the winning proposal shown in the following
table. There is no proposal beating all other proposals, hence no Con-
table. There is no proposal beating all other proposals, hence no Con-
dorcet winner. To see why, notice that there are two subsets of proposals,
H = {a, b, c} and L = {d, e, f, g, h}. Any proposal in H wins over any

proposal in L. Within the subsets, there is neither a Condorcet winner,
nor a Condorcet loser (a proposal losing in all pairwise contests). Thus,
the structure of preferences is such that H constitutes a top cycle and L
a bottom cycle. The existence of a top cycle with more than just a single
element implies non-existence of Condorcet winner.

a b c d e f g h
a a c a a a a a
b b b b b b b
c c c c c c
d d d g h
e f e e
f g h
g g

2. For Condorcet winner to exist, we require the top cycle to be a single-


ton. Hence, if we have to eliminate exactly two proposals, we have two
options: (i) Either we eliminate two proposals from the top cycle (3 com-
binations) or (ii) we eliminate exactly one element from the top cycle and
one element from the bottom cycle (15 combinations). In total, we have
18 combinations.
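Pairwise-majority tables like the one in part 1 are easy to generate by machine; a small sketch with the three orderings of this exercise:

from itertools import combinations

prefs = {"A": "adbfcegh", "B": "ceahbgdf", "C": "bgchadfe"}   # best-to-worst

def winner(x, y):
    votes_x = sum(order.index(x) < order.index(y) for order in prefs.values())
    return x if votes_x >= 2 else y          # 3 voters, simple majority

pairwise = {frozenset(p): winner(*p) for p in combinations("abcdefgh", 2)}
condorcet = [x for x in "abcdefgh"
             if all(pairwise[frozenset((x, y))] == x for y in "abcdefgh" if y != x)]
print("Condorcet winner:", condorcet or "none")   # -> none
# the top cycle {a, b, c}: each element beats everything outside the set
top = [x for x in "abc"
       if all(pairwise[frozenset((x, y))] == x for y in "defgh")]
print("Each of", top, "beats every proposal in {d,...,h}")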

2.3 Euclidean preferences


Prove analytically that contract curves for Euclidean preferences are linear.
Use that on a contract curve, we cannot increase utility of one agent without
decreasing utility of the other agent.

SOLUTION Consider two agents with bliss points x ∈ R^n, z ∈ R^n and
utility functions u1(y) = H(X) and u2(y) = G(Z), where X denotes the Euclidean
distance of (y, x), X = √((y1 − x1)² + . . . + (yn − xn)²), and Z is the Euclidean
distance of (y, z), Z = √((y1 − z1)² + . . . + (yn − zn)²).
On a contract curve, y = arg max u1 subject to u2 = ū = const. To characterize the con-
tract curve, we therefore construct a Lagrangian L(y, λ) = u1 + λ(ū − u2). The
first-order conditions write, for any i ∈ 1, . . . , n,

∂L/∂yi = (H′(X)/X)(yi − xi) − (λG′(Z)/Z)(yi − zi) = 0.

Thus, for any i, j ∈ 1, . . . , n, we obtain

(yi − xi)/(yi − zi) = (yj − xj)/(yj − zj).

By rearranging, we obtain a linear relationship between yi and yj:

yi(xj − zj) − yj(xi − zi) + xi zj − xj zi = 0

2.4 Dominant point
Suppose a dominant point D exists in two-dimensional space. Consider qualified
majority voting, with quota 1/2 < m < 1. Which policies cannot be outvoted?

SOLUTION This must be a non-empty neighborhood of D. To see whether


a point A is in the neighborhood, we would construct an arbitrary line passing
through D, and a parallel line passing through A. If the latter line separated
the set of bliss points such that the share of the smaller subset would be less
than 1 − m (or, equivalently, the share of the larger subset would exceed m), then A would
not be in the neighborhood. This requires A to be sufficiently close to D. With
increasing m, this large asymmetry of the subsets becomes harder to achieve,
hence the neighborhood is larger.

3 Collective choice: Majority voting


3.1 3-person committee
You are a member of a 3-person committee which firstly votes to get a complete
proposal and then compares the proposal with the status quo. We have 2
proposals, A and B, of which you prefer A to B but the others prefer B to A.
What kind of proposal C do you have to propose so that A is selected, if

a) A is status quo and the others vote sincerely,


b) B is status quo and the others vote sincerely,
c) A is status quo and the others vote strategically,
d) B is status quo and the others vote strategically?

SOLUTION It is useful to write X ≻i Y for player i preferring X to Y, and
X ≻C Y for the committee voting for X over Y. Suppose you are player 1,
hence

A ≻1 B,  B ≻2 A,  B ≻3 A.
First and foremost, we use that in Stage 2, everyone votes according to his
or her true preferences, regardless of sincere or strategic voting (by backward
induction, there is no possibility of strategic voting for a worse alternative in
the final stage).

Case a) For A to win overall, a proposal C must win the pairwise vote with B
in Stage 1 (C ≻C B), and then it must lose the pairwise vote with A in Stage
2 (A ≻C C). The loss of C in Stage 2 implies that at least 2 players actually
prefer A to C. That can be either

i) both opponents 2 and 3,

ii) and/or you and one of the opponents (without loss of generality player 2).
It is straightforward to reckon that i) is impossible. By contradiction: if this
is so, then for both opponents B i A i C, and transitivity of their preferences
implies B i C. Irrespective of sincere or strategic voting, players 2 and 3 would
in Stage 1 vote for B, which means that A would lose in Stage 2.
Continue with ii): player 2 supports A to C, hence B 2 A 2 C. As regards
Stage 1, we need that B loses with C. Since for sincere voting, player 2 votes
for B in Stage 1, player 3 must vote for C, and C 3 B 3 A.
The final thing is to determine your preferences. In Stage 2, you always
vote sincerely, hence A 1 C. This however doesn’t restrict your preferences to
B 1 C or C 1 B.
Put intuitively: What happens is that you try to pit player 2 and 3 against
each other. You are a partner of player 3 in Stage 1 against player 2, but then
you become a partner of player 2 to vote against player 3. To summarize, if
players 2 and 3 vote sincerely, you win if you propose an amendment C where

A 1 B 1 C, B 2 A 2 C, C 3 B 3 A, or
A 1 C 1 B, B 2 A 2 C, C 3 B 3 A

Case b) For A to win, A must be voted against B in Stage 2. This is however
the final stage, where everyone votes according to his or her preferences, hence
B ≻C A. A cannot win.
(If our task were just to beat B, then we can make C win for preferences
A ≻1 C ≻1 B, B ≻2 A ≻2 C, C ≻3 B ≻3 A by strategically voting for C in
Stage 1.)

Case c) We proceed in the same way as in a), and examine ii). Again, we
need one supporter of A in Stage 2 (suppose again player 2), hence B ≻2 A ≻2 C.
This player always votes for B against C in Stage 1. The logic is that he can calculate
the consequence of his vote:

• vote for B: he effectively votes for B (B ≻C A in Stage 2)

• vote for C: he effectively votes for A (A ≻C C in Stage 2)

Therefore, player 2 votes for B, and we would need player 3 to support C against B in
Stage 1. Regardless of his preferences over C, he can also calculate the consequences
of his vote:

• vote for B: he effectively votes for B (B ≻C A in Stage 2)

• vote for C: he effectively votes for A (A ≻C C in Stage 2)

The reasoning is identical to that for player 2, so he will support B, even if we
have C ≻3 B. As a result, A cannot win.

Case d) The same logic as in b) applies. A cannot win.

3.2 Ancient letter


Frequently cited is an ancient Roman letter from Pliny the Younger to Titus
Aristo asking for reassurance on a matter that arose during his chairmanship
of the Senate. Consul Afranius Dexter was found dead, and it was not clear
whether he had committed suicide, had ordered his servants to kill him, or
whether they had killed him out of malice. Three possibilities were suggested:
they be acquitted (A), they be banished (B), or they be executed (E). In those
days, questions were resolved by a literal division of the house. That is, those
who agreed with a motion sat with the person who made the motion, and those
who disagreed sat on the other side of the room.

a) You don’t know opinions of the others, but want the servants to be ban-
ished. How do you order sequence of votes? Explain in detail.

SOLUTION We will list below all possible procedures applied in all possible
house pairwise votes. There are 3 procedures: i) vote A and E, then winner
with B; ii) vote A and B, then winner with E; iii) vote B and E, then winner
with A. Pairwise votes are in rows of the table. In cells, the first item is winner
in Stage 1, and the second item is the winner in Stage 2, i.e. the overall winner.
Rows marked with * are the cases without a Condorcet winner.

                 i) A to E    ii) A to B    iii) B to E
AE AB EB         A, A         A, A          E, A
AE AB BE         A, A         A, A          B, A
AE BA EB *       A, B         B, E          E, A
AE BA BE         A, B         B, B          B, B
EA AB EB         E, E         A, E          E, E
EA AB BE *       E, B         A, E          B, A
EA BA EB         E, E         B, E          E, E
EA BA BE         E, B         B, B          B, B

In the table, we use that members of the house don’t know about preferences
of the others, so vote sincerely. It is immediately seen that for cases with
Condorcet winner, it is irrelevant which order of voting is used: Condorcet
winner is always selected. However, for cases without Condorcet winner, the
winner is always the alternative not voted in Stage 1.
As a result, we recommend Pliny the Younger to use procedure A to E
(denoted i)) so as to maximize chance of servants being banished (B).

3.3 Median voter
Identify median voter in your country of origin. Use publicly available statistics,
justify your selection of data and discuss whether single-peakedness may hold
in the criterion that you picked up.

SOLUTION Use available socioeconomic characteristics that determine pref-


erence for general redistribution.

3.4 Referendum test


In McEachern’s test, think about what would happen if DSQ > Dm and GM
(greater than majority referendum) would be applied instead of normal (simple
majority) referendum. What signs of coefficients (positive or negative) should
we expect?

SOLUTION Discussed in the lecture: the sign of GMi shall be opposite.

3.5 Amendments
There are 3 policies: status quo (SQ), original bill (B), and amendment (A).
The Congress controlled by Democrats has preferences B  A  SQ and the
Republican President has preferences SQ  A  B. The order of voting is:
1. The Congress must prepare a final bill. It may or may not propose amend-
ment to the original bill.
2. The President may apply veto on the final bill.
Find equilibria in the following cases:
1. The presidential veto cannot be overridden and the amendment can be pro-
posed only by the Congress.
2. The presidential veto can be overridden and the amendment can be proposed
only by the Congress.
3. The presidential veto cannot be overridden and the amendment can be pro-
posed both by the President and the Congress.
4. The presidential veto can be overridden and the amendment can be proposed
both by the President and the Congress.

SOLUTION We construct extensive games, apply backward induction (best


responses highlighted by solid lines) and identify equilibria. In Cases 1 and 3
(effective veto), the President gets his first best, SQ. This is obvious, because a veto
always leads to his or her first best, and a veto is always possible. In Case 2,
Congress gets its best, because the President is powerless. In Case 4, we have an
intermediate case: the President can use agenda-setting power to avoid the
original bill, hence a compromise (the amended bill) takes place.

Figure 5: President vs Congress

3.6 4 proposals in a 3-person committee


Suppose you are one of three members of a committee that must choose an
outcome from among A, B, C, D. The preference of the members of the com-
mittee are A  B  C  D for you, D  C  A  B for Member 2, and
C  B  D  A for Member 3.
What are the outcomes of the following agendas if everyone is strategic?

1. B versus C, the winner against A, the winner against D

2. A versus C, the winner against D, the winner against B

3. B versus D, the winner against C, the winner against A

If it were up to you, which agenda would you choose?

SOLUTION These are extensive games that can be solved by backward in-
duction. To do that quickly, first find the outcomes of sincere majority pairwise
voting:

B C D
A A C D
B - C B
C - - C

Agenda 1 We may use the following table:

nominal pair real pair outcome


Stage 3
DA DA D
DB DB B
DC DC C
Stage 2
BA BD B
CA CD C
Stage 1
BC BC C

The table is constructed such that it solves the game from the back. In
Stage 3, it finds the majority voting outcomes for all possible pairs. These outcomes
are used in Stage 2; here, to vote nominally in favor of A means to vote really
for D. The outcomes are then used for Stage 1. Finally, we
can see that C wins (no wonder, because C is the Condorcet winner).

Agenda 2 We again use the table to derive that Condorcet winner wins.

nominal pair real pair outcome


Stage 3
AB AB A
CB CB C
DB DB B
Stage 2
AD AB A
CD CB C
Stage 1
AC AC C

Agenda 3 Also here the Condorcet winner wins.

nominal pair real pair outcome


Stage 3
BA BA A
CA CA C
DA DA D
Stage 2
BC AC C
DC DC C
Stage 1
BD CC C

To conclude, if all agents are strategic, it is irrelevant which agenda is used;
the outcome is always identical, and it is the Condorcet winner. This is a direct
consequence of the fact that all proposals are voted on throughout the game, so every-
one anticipates the Condorcet winner to pass. The role of agenda-setting for strate-
gic voters (at least in these simple settings with pairwise majority voting) is
restricted only to the case of non-existence of a Condorcet winner.
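The backward-induction tables above can also be generated by a short recursive routine; a sketch that solves all three agendas of this exercise under sophisticated (strategic) voting:

prefs = {1: "ABCD", 2: "DCAB", 3: "CBDA"}      # best-to-worst orderings, Exercise 3.6

def maj(x, y):
    """Sincere pairwise majority winner between proposals x and y."""
    return x if sum(p.index(x) < p.index(y) for p in prefs.values()) >= 2 else y

def outcome(agenda):
    """Sophisticated-voting outcome; agenda = [(first pair), later entrants...]."""
    (x, y), rest = agenda[0], agenda[1:]
    if not rest:
        return maj(x, y)                        # the final vote is sincere
    # voting for x (resp. y) really selects the continuation outcome it induces
    return maj(outcome([(x, rest[0])] + rest[1:]),
               outcome([(y, rest[0])] + rest[1:]))

print(outcome([("B", "C"), "A", "D"]))   # agenda 1 -> C
print(outcome([("A", "C"), "D", "B"]))   # agenda 2 -> C
print(outcome([("B", "D"), "C", "A"]))   # agenda 3 -> C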

3.7 Strategic voting


We have three committee members and three proposals. One proposal is a Con-
dorcet winner. Prove that for any complete agenda (any sequence of pairwise
votes including all proposals), the Condorcet winner is the equilibrium outcome.

SOLUTION Without loss of generality, denote proposals voted in Stage 1


{a, b}, and the remaining proposal c. Hence, Stage 2 is either vote over {a, c}
or {b, c}.

• C.w. is c: Always voted in Stage 2, since in the last stage, each member
votes sincerely.

• C.w. is a: Proposal a must be preferred over b by at least 2 members, and
over c by at least 2 members. There are only two possibilities: (i) The
two majorities consist of the same two members. Then, a is the first-best
alternative for both of them, and the equilibrium is that they vote for their
first-best alternative in Stage 1 and then in Stage 2. (ii) The two majorities
differ. Let the pair preferring a over b be members {2, 3} and the pair
preferring a over c be members {1, 2}. Hence, the preferences of members
1 and 3 must be b ≻1 a ≻1 c and c ≻3 a ≻3 b; for member 2, a ≻2 b and a ≻2 c.
Thus, voting for a in Stage 1 is effectively voting for the C.w. in Stage 2,
and voting for b in Stage 1 is effectively voting for the second-best alternative of
member 2 (b if a ≻2 b ≻2 c, and c if a ≻2 c ≻2 b). In any case, Stage 1 is
effectively a pairwise vote between the C.w. and an alternative proposal, and in
such a pairwise vote the Condorcet winner must win (by definition of the C.w.).

• C.w. is b: This is just a problem identical to the previous one (the C.w. is again
voted in Stage 1, and a majority of members agrees on passing it to Stage
2).

To provide intuition even more generally: To misrepresent one's preferences
with respect to the Condorcet winner (i.e., not to support it and to let it be
eliminated somewhere along the agenda) could be a best response only if it implies
that an alternative proposal passes. However, in any pairwise vote over the C.w.
and this alternative, the majority of course supports the Condorcet winner.

3.8 Strategic voting: The general result


Consider a set of policies with one policy being a Condorcet winner. The policy-
makers are strategic and non-cooperative. Voting is such that there is a full
ordering of policies which determines the order in which policies are
sequentially voted in pairwise votes. In each vote, the loser is outvoted and the
winner passes to the next round. Is it possible that the equilibrium outcome is
not a Condorcet winner? Discuss formally.

SOLUTION The answer is surprisingly straightforward. There is a stage t


where Condorcet winner C is proposed and voted against alternative proposal A.
By voting, the policy-makers select a subgame, where we have structurally only
two different types: A-subgame (C eliminated), and C-subgame (A eliminated).
Each subgame has an equilibrium outcome proposal, to be denoted A∗ and C ∗ .
Clearly, A∗ 6= C and C ∗ 6= A.
In stage t, the A-subgame is selected if and only if A∗ ≻C C∗. (Recall that
policy-makers vote on the basis of the anticipated consequences, independently
of the content of the currently voted proposals.) Since A∗ ≠ C, this requires C∗ ≠ C.
(Otherwise, C∗ = C ≻C A∗ by the definition of the Condorcet winner, and the C-
subgame is selected.) Thus, a subgame in which C is allowed to pass must end up
with a result that is not the Condorcet winner.
But if C is passed, we would be in stage t + 1, where the problem would just
replicate (C facing an alternative B), and we would again require C ∗ 6= C. This
ends up in the last stage, where it must be that C ∗ = C, hence it is impossible to
maintain C ∗ 6= C. Therefore, once C appears on the ballot, it is never outvoted.
In other words, each subgame containing C leads to C, and this is true also for
the entire game (= improper subgame).

3.9 Two-party electoral competition


Suppose that preferred taxes are tM < t∗R < t∗L for median voter (M), right-
wing (R) and left-wing party (L). The parties engage in simultaneous electoral
competition with binding platforms.

a) What electoral platforms will R and L set under deterministic voting?

b) What under stochastic voting?

SOLUTION We have to distinguish between deterministic and stochastic


voting.

Deterministic voting This is simple. Suppose tR = t∗R . The best response


of L is the best of the three alternatives:

• loss, tL > t∗R : pL = 0, t = tR = t∗R

• tie, tL = t∗R: pL = 1/2, t = tR = t∗R

• win, tM ≤ tL < t∗R : pL = 1, t = tL

We have policy-seeking parties, hence

UL (t∗R ) > UL (tL < t∗R ).


This means that L is willing to select loss or tie. The best response of R to
this choice is tR = t∗R (bliss point), so we have equilibrium

tR = t∗R ≤ tL .

Stochastic voting We use the first-order conditions imposed on expected


utilities EUR and EUL to be equal zero, as derived in the lecture:

(dpL/dtL)[UL(tL) − UL(tR)] = −pL · dUL(tL)/dtL

Step 1. We can obviously discuss only the cases tM ≤ tL ≤ t∗L and tL ≥ tR:

(dpL/dtL)[UL(tL) − UL(tR)] = −pL · dUL(tL)/dtL,
where dpL/dtL ≤ 0 and UL(tL) − UL(tR) ≥ 0 on the left-hand side, while −pL < 0
and dUL(tL)/dtL ≥ 0 on the right-hand side.

It is easy to find that this is satisfied such that 0 = 0 only if tR = tL = t∗L .


Here, R would obviously decrease tR (increases both pR = 1 − pL and UR (tR )),
so it cannot be an equilibrium. Therefore, in equilibrium, both terms must be
strictly negative. This implies tR < tL < t∗L .
Step 2. Now focus upon R. We discuss only cases tM ≤ tR ≤ t∗R . Then:

(dpR/dtR)[UR(tR) − UR(tL)] = −pR · dUR(tR)/dtR,
with the same sign pattern as above: dpR/dtR ≤ 0 and UR(tR) − UR(tL) ≥ 0, while
−pR < 0 and dUR(tR)/dtR ≥ 0.

We use tR < tL. Then, the condition would be satisfied with 0 = 0 only if
tR = tM = t∗R, which violates the assumptions. Therefore, in equilibrium, both
terms must be strictly negative. This implies tM < tR < t∗R . Finally, UR (tR ) >
UR (tL ) implies that tR < t∗R < tL . Overall,

tM < tR < t∗R < tL < t∗L .


We observe incomplete convergence to the median platform. Unlike under deter-
ministic voting, R has to make a concession, and L is not willing to converge to the
bliss point of R.

3.10 Redistribution
Consider our example of redistribution with distortion, but suppose that the
subsidy is paid only after the person stops working (i.e. it is a pension provided
by the government) and people are of different ages. Assume that an individual
works for wi time, where wi ∈ [0, 1], and earns pre-tax income yi ∈ [0, 1], which is
taxed by a flat tax t ≥ 0; then he/she retires and receives pension s (see the lecture
notes for the definition of s). Assume that the length of retirement is 1, and
(wi, yi) is uniformly distributed on [0, 1] × [0, 1].
a) Derive individually optimal ti as a function ti (yi , wi ).
b) Derive density function of ti over t ∈ [0, 1].
c) Identify individual/s with median value of ti (to be denoted as tM ).
d) Is tM a Condorcet winner or not?

SOLUTION The pension is paid out of tax revenues, which are proportional to
the average tax base,

∫₀¹ ∫₀¹ wy dy dw = ∫₀¹ w [y²/2]₀¹ dw = (1/2) ∫₀¹ w dw = 1/4.

Retirement lasts a single period, hence individual consumption writes

ci = wi yi (1 − t) + (1 − λt) · t · (1/4) = wi yi (1 − t) + t(1 − λt)/4,

where the first term is income per period and the second term is the pension per
period. By the first-order condition,

∂ci/∂t = −wi yi + 1/4 − (λ/2)t = 0,

ti = (1 − 4 wi yi)/(2λ).

From above, two individuals i, j prefer an identical tax if their lifetime (factor)
income Y = wy is identical, Yi ≡ wi yi = wj yj = Yj. We have therefore
ti = ti(Yi), or by inversion

Yi(ti) = (1 − 2λti)/4.
We can define two distribution functions. Let F(t) be the share of individuals
whose preferred tax is less than t, ti ≤ t, and let G(Y) be the share of individuals
whose lifetime income is less than Y, Yi ≡ wi yi ≤ Y. By the equation above, we
have ti ≥ t ⟺ Yi ≤ Y, hence

G(Y) = 1 − F(t(Y)),

or alternatively

G(Y(t)) = 1 − F(t).


The following figure illustrates. For any lifetime income Y, the critical indi-
viduals for whom wi yi = Y are located on the hyperbola. For these individuals,
we can define their optimal t, satisfying Y = Y(t), i.e. t = (1 − 4Y)/(2λ).

Figure 6: Pensions

Now, the share of individuals whose lifetime income is less than Y, G(Y),
is determined by the mass of (yi, wi) pairs below the hyperbola. Since yi and wi
are uniformly distributed, the share equals the area below the hyperbola:

G(Y) = Y + ∫_Y^1 (Y/y) dy = Y + Y[ln y]_Y^1 = Y − Y ln Y = Y(1 − ln Y).
The other distribution function, F(t) = 1 − G(Y(t)), corresponds to the complementary
area above the hyperbola. The density function f(t) is obtained by taking the first
derivative of F(t):

f(t) = ∂F(t)/∂t = −(∂G(Y)/∂Y)(∂Y/∂t) = ln Y · (∂Y/∂t) = −(λ/2) ln((1 − 2λt)/4).
The median tax is defined by F(tM) = 1/2, or G(Y(tM)) = 1/2:

Y(tM)(1 − ln Y(tM)) = 1/2,

which, with Y(tM) = (1 − 2λtM)/4, gives the implicit solution (using 1 = ln e):

(1 − 2λtM) ln(4e/(1 − 2λtM)) − 2 = 0.
We can easily prove that consumption function ci (t) is quasiconcave in single
policy dimension t (ti is a unique local maximum for this function, and the
second derivative is always negative on t ∈ [0, 1]). With quasiconcave preferences
on single dimension, tM is a Condorcet winner.
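The implicit equation for tM has no closed form, but it is easy to solve numerically; a sketch using bisection on x = 1 − 2λtM (so that tM = (1 − x)/(2λ)), with λ = 1 purely for illustration:

import math

def median_tax(lam, tol=1e-10):
    # solve x * ln(4e/x) = 2 for x = 1 - 2*lam*t_M on (0, 1]
    f = lambda x: x * math.log(4 * math.e / x) - 2
    lo, hi = 1e-9, 1.0            # f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    x = (lo + hi) / 2
    return (1 - x) / (2 * lam)

print(median_tax(1.0))   # roughly 0.127 for lambda = 1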

3.11 Redistribution: advanced (Hsu & Yang 2007)


In public sector economics, an important concept is the marginal cost of
public funds (MCF). It defines the cost of raising an extra unit of tax revenues.
In economy where taxation implies no deadweight loss, it is easy to see that
M CF = 1; to get an extra dollar of tax revenue means a dollar less of after-tax
income. In economy with distorting taxation, M CF ≥ 1.
To see that, denote t ∈ [0, 1] a flat tax rate imposed on yi , a pre-tax income
of an individual i = 1, . . . , N . Denote total tax revenues T (t), average income
ȳ, and let L(t) be the total fall in after-tax incomes. Marginal cost of public
funds is defined as
MCF(t) ≡ dL/dT.
With non-distorting taxation,

T(t) = Σi t yi = t n ȳ,
L(t) = Σi t yi = t n ȳ.

You can directly apply that T (t) = L(t), hence M CF (t) = 1. To identify
MCF in the general case, it is nevertheless better to write
MCF(t) = dL/dT = (dL/dt)/(dT/dt).

a) Derive T (t) and L(t) for the economy with distorting taxation that was
introduced in the lecture ‘Majority’.

b) Derive M CF (t) as a function of tax rate t. Is it increasing, constant, or
decreasing?

c) Find M CF (t) that is in the majority voting equilibrium.

d) Measuring the deadweight loss (and the MCF) can be very difficult. Can an
economist identify the MCF in the economy only by studying the pre-tax income
distribution? How? (This question is motivated by the Hsu & Yang 2007
paper in Economic Inquiry.)

SOLUTION In the lecture, we have assumed that a share λt of the tax
base disappears, so the total tax revenues are T(t) = t n ȳ(1 − λt). The difference
between pre-tax and after-tax income (i.e., not yet accounting for the subsidy) is
t yi for each individual, which in total gives L(t) = t n ȳ.
To derive MCF(t), compute dL/dt = n ȳ and dT/dt = n ȳ(1 − 2λt). Then

MCF(t) = (dL/dt)/(dT/dt) = 1/(1 − 2λt),
dMCF(t)/dt = 2λ/(1 − 2λt)² > 0,

so MCF is increasing in the tax rate.
(Be careful: you cannot simply divide L by T to get 1/(1 − λt) and treat this
ratio as the MCF, nor can you differentiate this ratio and argue that

MCF(t) = d[1/(1 − λt)]/dt = λ/(1 − λt)².

Both would be incorrect: the MCF is the ratio of the derivatives dL/dt and dT/dt,
not the ratio L/T or its derivative.)

We know from the lecture that in the majority voting equilibrium, the tax
is the median voter's preferred tax, which writes (recall again the lecture)

tM = (ȳ − yM)/(2λȳ).

Thus, the MCF in the equilibrium is MCF(tM) = 1/(1 − 2λtM) = ȳ/yM > 1.


To conclude: If an economist can observe only the distribution of pre-tax (wage)
incomes yi (e.g., from household income statistics), and believes that in political
equilibrium the median voter is decisive, then he or she can argue that the marginal
cost of public funds is simply the ratio of the mean to the median pre-tax income.
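This mean-to-median shortcut is easy to illustrate on simulated data; a sketch that assumes, purely for illustration, lognormally distributed pre-tax incomes:

import random, statistics

random.seed(0)
incomes = [random.lognormvariate(mu=0.0, sigma=0.8) for _ in range(100_000)]
mcf = statistics.mean(incomes) / statistics.median(incomes)
print(f"implied MCF at the voting equilibrium: {mcf:.2f}")   # about exp(0.8**2 / 2) = 1.38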

4 MCF: extension
Following the previous example, suppose M CF (t) = 1 + λt. What are the loss
and revenue functions, L(t), T (t)? Prove that for any t > 0, T (t) < tnȳ.

SOLUTION In our specification, the deadweight loss of taxation is for sim-
plicity modeled as the loss on the part of policy maker (e.g. administrative costs),
not the loss affecting pre-tax incomes (e.g., distortions). With unchanged pre-
tax incomes, the loss function is, by definition, L(t) = tnȳ.
By the definition of the MCF,

1 + λt = MCF(t) = (dL/dt)/(dT/dt) = n ȳ/(dT/dt).

By integrating dT(t)/dt = n ȳ/(1 + λt), we obtain, using the normalization T(0) = 0,

T(t) = (n ȳ/λ) log(1 + λt).

The inequality T(t) < t n ȳ is equivalent to λt > log(1 + λt), and using the substitution
x = λt, to f(x) = e^x − x − 1 > 0 for x > 0. To see that this holds, notice
that f(0) = 0 and f′(x) > 0 for x > 0.

5 Public spending
5.1 Strategic deficit I
Suppose that politics is a conflict of two representative consumers, right-wing R
and left-wing L. Each has an identical endowment m ≫ 0, pays a head tax τ ∈ [0, m],
consumes private goods in amount m − τ and a public good provided by the
government in amount g ≥ 0. The utility functions in period t are

uL,t = gt/2 + k ln(m − τt),
uR,t = gt/2 + ln(m − τt),

where 0 < k < 1. Lifetime utilities are UL = Σt uL,t and UR = Σt uR,t.

a) If government budget must be balanced in each period (2τt = gt ), what


are the optimal amounts of public good for R and L?

b) Suppose we have two periods, t = 1, 2. In period t = 1, R controls the


government budget (sets g1 and τ1 ). In period t = 2, L controls the budget
(sets g2 and τ2 ). The budget must be balanced at the end of the second
period, 2(τ1 + τ2 ) = g1 + g2 , but not necessarily at the end of the first
period. What taxes t1 , t2 will be imposed, and what is the deficit?

c) In two-period setting, let L control the budget in t = 1 and R control


the budget in t = 2. What taxes t1 , t2 will be imposed, and what is the
deficit?

For b) and c), analyze only interior solutions (i.e. with gt > 0).

SOLUTION
a) For 2τt = gt, we can write lifetime utilities as UL = Σt [τt + k ln(m − τt)]
and UR = Σt [τt + ln(m − τt)]. From the first-order conditions, this yields
for each period an identical optimal tax, τ∗L = m − k and τ∗R = m − 1.
Hence,

g∗L = 2(m − k) > 2(m − 1) = g∗R.
gL = 2(m − k) > 2(m − 1) = gR .

b) R plays in period 1 and L plays in period 2.

1. First, examine what L will do in period 2. L inherits deficit b ≡


g1 − 2τ1 , so he/she is restricted by necessity to balance the budget,
g2 = 2τ2 − b. L sets g2 , τ2 to maximize utility in period 2, uL,2 :

 
g2 g2 g2 b
(g2 , τ2 ) = arg max +k ln [m − τ2 ] = arg max +k ln m − −
2 2 2 2

This yields g2 = 2m − 2k − b. (Since we consider only interior solu-


tions, we have to impose b < 2m − 2k.)
2. Second, examine what R will do in period 1, anticipating g2 = 2m −
2k − b. R maximizes lifetime utility UR :
g1 g2
(g1 , τ1 ) = arg max + ln [m − τ1 ] + + ln [m − τ2 ]
2 2
g1 b g2
Imposing g2 = 2m − 2k − b, τ1 = 2 − 2 and t2 = 2 + 2b , we have

 
g1 − b g1 − b
UR = +ln m − +m−k+ln k = τ1 +ln(m−τ1 )+m−k+ln k.
2 2

This yields unique τ1 = m − 1. The pair (g1 , b) is not uniquely


characterized due to quasilinearity of the utility function; with τ1 as
above, we only have g1 = 2τ1 + b = 2m − 2 + b.
3. As a final step, derive τ2 as a function of b. We know τ2 = g22 + 2b =
2m−2k−b+b
2 = m − k. The solution can be characterized for any
b < 2m − 2k as follows:

τ1 = m − 1 = τR∗ g1 = 2m − 2 + b
τ2 = m − k = τL∗ g2 = 2m − 2k − b

If, for some reason, R wants to set (g1 , b) so as to maximize utility


in the period that he/she has in control (i.e. period 1; notice that
the lifetime utility UL will be constant if g1 and b satisfy conditions
above), then we have:

31
g1 g1 g1
(g1 , b) = arg max + ln [m − τ1 ] = arg max + ln 1 = arg max
2 2 2
This implies maximal g1 and maximal deficit b, but since we need also
non-negative spending in period 2 to have interior solution (g2 > 0),
our maximal deficit is b = 2m − 2k − ε, where ε > 0 is very small.
We have g1 = b + 2τ1 = 2m − 2k + 2m − 2 − ε = 4m − 2k − 2 − ε.
Then, the particular solution with strategic deficit is:

τ1 = m − 1 = τR∗ g1 = 4m − 2k − 2 − ε
τ2 = m − k = τL∗ g2 = ε

c) R plays in period 2 and L plays in period 1. You proceed by analogy and


get

τ1 = m − k = τL∗ g1 = 2m − 2k + b
τ2 = m − 1 = τR∗ g2 = 2m − 2 − b

In both cases, both L and R manage to get in the period when they rule the
optimal taxation, where individual marginal cost of public spending equals indi-
vidual marginal benefit of public spending. The deficit not necessarily emerges.

5.2 Strategic deficit II


We have 2 periods with discount factor equal 1 (i.e., zero interest rate). In
period 1, a left-wing party is in power. In period 2, a right-wing party is in
power. In any period, preferences of both parties over spending x and tax t
are u(x, t) = bi x − t2 /2, where the only difference is bL > bR . Budget must be
balanced after two periods, x1 + x2 = t1 + t2 .

a) Is it optimal for a left-wing government to create a deficit in period 1?


If so, is spending x1 higher or lower than in a case when deficit is not
possible at all?

SOLUTION Start with the case of zero deficit. Then, x1 = t1 and x2 = t2 .


In period 2, R-party selects xR = tR = arg max{bR x − x2 /2} = bR . In period
1, L-party selects xL = tL = arg max{bL x − x2 /2} = bL .
With deficit d := x1 − t1 , R-party in period 2 optimizes as follows:

x2 = arg max{bR x2 − (x2 + d)2 /2} = bR − d


We may also re-write into t2 = x2 + d = bR . This says that R-party keeps
tax t2 at a constant level (this is property of quasilinear utility as we have it
here.)

32
Since this is an extensive game, where L-party plays first, L-party anticipates
the best response of R-party. Thus, L-party optimizes over both periods, where
using discount factor equal,

(x1 , d) = arg max{bL x1 − (x1 − d)2 /2 + bL x2 − (x2 + d)2 /2}


= arg max{bL x1 − (x1 − d)2 /2 + bL (bR − d) − b2R /2}.

Maximizing over both variables yields an identical condition, x1 − d = bL .


This states that L-party keeps tax t1 = x1 − d at a constant level bL . Otherwise,
L-party is indifferent over the size of the deficit.
So, with this specification of utility, we observe strategic effect only insofar
as each party minimizes deviation from the party-optimal tax. Control over tax
in each period has only the party that rules that period, and all equilibrium are
characterized simply by t1 = bL and t2 = bR .
To answer the question: L-party may choose spending x1 below bL , but
then—to keep tax constant— she sets a negative deficit (i.e., makes a budgetary
surplus). If she instead chooses x1 > bL , then she sets a positive deficit.

5.3 Spending cap given by losers


In the lecture, we discussed the possibility to agree on a spending cap T as
a remedy to overspending (tragedy of budgetary commons). Suppose current
losers are agenda-setters, and current winner only vetoes their take-or-leave
proposal.

a) For what parameters (n, δ) is the equilibrium cap socially optimal?

SOLUTION The losers’ optimal cap is Tk = δ 2 < 1. Losers try to get the
cap as close to this level as possible. The problem is that the winner can veto
their proposal.
The game is a simple extensive game: (i) Loser propose a cap. (ii) Winner
agrees, or vetoes. In the case of veto, there is no cap. We solve the game
by backward solution. That means, we decide when the winner is willing to
approve a cap. Approval depends on whether the winner gets more or less than
if vetoing the cap.
Winner’s utility in the case of any cap T is Wj (T ) (recall lecture), and is
increasing if T < Tj and decreasing if T > Tj , where we know 1 < Tj < n2 .
√ T δ  √ 
Wj (T ) = 2 T − + 2 T −T .
n (1 − δ)n

In the case of veto (no cap), the utility is as-if a cap were set at T = n2 ,
where each winner in any period sets x = T = n2 , so the utility is

2δn + n2 (1 − 2δ)
Wj (n2 ) = .
(1 − δ)n

33
We also evaluate winner’s utility for the socially-optimal cap, T c = 1:

n + (n − 1)(1 − 2δ)
Wj (1) = .
(1 − δ)n

Now, losers want to get approval for the cap that is as close as possible to
Tk < 1. Since no-cap is strictly worse for the losers, they will always propose a
cap that is acceptable to the winner. The equilibrium cap is the lowest possible
cap where the winner is still (at least weakly) better off than having no cap at
all. Thus, the winner’ utility under the equilibrium cap must be exactly equal
to the utility under no cap.
If the equilibrium cap should be a socially optimal cap 1, then we simply
require Wj (n2 ) = Wj (1). By comparing,

2δn + n2 (1 − 2δ) = n + (n − 1)(1 − 2δ),

(1 − 2δ)(n − 1)2 = 0.
This holds always for n = 1 (which obviously violates assumption n > 1)
and for δ = 1/2, irrespective of the number of players.

5.4 Coalitional vs single-party governments


Suppose that structure of tax base is such that each of n groups in a society
pays equal share of total public spending.
If a government is coalitional, assume that each of n groups proposes xi
that is collective benefit for this group. Denote the optimal spending xC i . In
a single-party government, only the winner group gets xj , the others get zero.
Denote the optimal spending of the winner xSj .
Suppose utility function ui = ui (xi , ti ) satisfies the following standard as-
sumptions:

∂ui ∂ 2 ui
> 0, ≤0
∂xi ∂x2i
∂ui ∂ 2 ui
< 0, ≤0
∂ti ∂t2i

a) Is it possible that xC S
j > xj , i.e. the winner in the single-party government
spends less than if he were just a member of the coalitional government?
P
SOLUTION Equal share of total public spending means ti = j xj /n. Now,
we use that in both types of government, each group that is in power determines
her own spending. Thus, xi = arg max ui . In the optimum, the total differential
is zero,
∂ui ∂ui
dui = dxi + dti = 0
∂xi ∂ti

34
P
From ti = j xj /n, we have dti = dxi /n. Thus, the differential rewrites
dui ∂ui 1 ∂ui
= + = 0.
dxi ∂xi n ∂ti
Now, under P coalitional government, we know that all coalitional parties
spend, so tC
i = j xj /n > xi /n. Thus, due to non-decreasing marginal cost,
we have that the marginal cost of a unit of collective benefit is relatively
largerPin a coalitional government that in a single-party government, where
tSi = j xj /n = xi /n. For some identical value xi = x̄,
∂uSi ∂uC
i
0> (x̄, tSi ) ≥ (x̄, tC
i ).
∂ti ∂ti
Now, inserting into the implicit solutions,
∂uC
i 1 ∂ui C C 1 ∂uSi C S ∂uSi S S
(xC C
i , ti ) = − (xi , ti ) ≥ − (xi , ti ) = (x , t ).
∂xi n ∂ti n ∂ti ∂xi i i
To conclude, from non-increasing marginal benefit, we have
∂uC
i ∂uSi S S
(xC , t C
) ≥ (x , t ) =⇒ xC S
i ≤ xi .
∂xi i i ∂xi i i

5.5 Coordinated budgeting


Assume three parties, A, B and C, and two types of public expenditures, x and
y. Parties have the following preferences over budgets of total size B = x + y:
uA = −(2 − x)2 − (2 − y)2
uB = −(3 − x)2 − (1 − y)2
uC = −(xc − x)2 − (yc − y)2
a) For which xc ≥ 0, yc ≥ 0 does coordinated budgeting lead to higher B
than sequential budgeting?

SOLUTION Both for party A and B, the optimal total budget is B = 4.


Therefore, in coordinated budgeting, they will create a majority in the first step
and agree on B = 4, regardless of preferences in C. We want that in sequential
budgeting, B < 4.
This problem can be solved both graphically or analytically. Analytically,
we want x + y < 4. This implies only two cases: (i) C is decisive (median) at
least in either of dimensions, and less than median in the other dimension; (ii)
C is less than median in both dimensions. In (i), consider that C is decisive in
x, and less than median in y; hence, xc ∈ [2, 3) and yc < 1. If C is decisive in
y, we have a mirror-case, xc < 2 and yc ∈ [1, 2). If C is decisive in both x and
y, then xc ∈ [2, 3) and yc ∈ [1, 2), where obviously also xc + yc < 4. In (ii), we
have xc ≤ 2 and yc ≤ 1.
Graphically, this amounts to polygon with coordinates (0, 0)−(3, 0)−(3, 1)−
(2, 2) − (0, 2) − (0, 0).

35
5.6 Pivotal legislator
We have n non-cooperative legislators in the Parliament (n is even number),
each with circular preferences in space x × y. The bliss point of a legislator
i ∈ {1, . . . , n} is (xi , yi ). Suppose a new legislator enters the Parliament, with
bliss point (xn+1 , yn+1 ).

1. Derive all possible (xn+1 , yn+1 ) for which in sequential budgeting, the
equilibrium policy satisfies (x, y) = (xn+1 , yn+1 ).

2. Derive all possible (xn+1 , yn+1 ) for which in coordinated budgeting, the
equilibrium policy satisfies (x, y) = (xn+1 , yn+1 ).

3. Derive (or just plot a graph) all possible (xn+1 , yn+1 ) for which in both
sequential and coordinated budgeting, x 6= xn+1 and y 6= yn+1 .

SOLUTION We know that in each stage, a proposal is equilibrium if it is a


median proposal. Label xi , i ∈ 1, . . . , n + 1 such that x[1] ≤ x[2] ≤ . . . ≤ x[n+1] .
By analogy, introduce y[i] , (x + y)[i] and (x − y)[i] .
Sequential budgeting: The necessary and sufficient condition is that xn+1 =
x[n/2+1] and yn+1 = y[n/2+1] .
Coordinated budgeting: The necessary condition and sufficient is that (xn+1 −
yn+1 ) = (x − y)[n/2+1] and (xn+1 + yn+1 ) = (x + y)[n/2+1] .
To answer the last point: In sequential budgeting, the set Ω where x 6= xn+1
and y 6= yn+1 is characterized by xn+1 < x[n/2+1] or xn+1 > x[n/2+1] and
yn+1 < y[n/2+1] or yn+1 > y[n/2+1] . In coordinated budgeting, we know that
equilibrium (x, y) satisfies (x − y) = A := (x − y)[n/2+1] and (x + y) = B :=
(x + y)[n/2+1] . Alternatively,
 
A+B B−A
(x, y) = , .
2 2

Thus, in coordinated budgeting, the set Θ where x 6= xn+1 and y 6= yn+1 is


characterized by xn+1 < (A + B)/2 or xn+1 > (A + B)/2 and yn+1 < y(B−A)/2
or yn+1 > y(B−A)/2 . Overall, we seek (xn+1 , yn+1 ) ∈ Ω ∩ Θ.

5.7 Order of voting


Suppose three political parties, i = 1, 2, 3, have the following preferences over
tax rate t and public spending g, where t1 ∗ < t2 ∗ < t∗3 and g1 ∗ = g2 ∗ = g3 ∗:

ui = −(t − ti ∗)2 − (g − gi ∗)2


The parties vote in two stages. In each stage, they use majority voting which
ends if there is no proposal able to beat the last agreed proposal. In Stage 2,
public spending g is determined. Is there a difference if there a vote about tax
rate or a vote about deficit in Stage 1?

36
SOLUTION No. In Stage 2, regardless whether we vote on constraint b =
b̄ = g − t or t = t̄ = g − b, the pivotal party is party 2. In Stage 1, the pivotal
party is again party 2. Hence, it proposes either ¯(t) = t2 (voting about tax
rate), or b̄ = b2 = g2 − t2 (voting about deficit). Either is a Condorcet winner in
Round 1. In Stage 2, the pivotal party 2 proposes g = g2 , which is a Condorcet
winner. This logic can be demonstrated by drawing graphs of constraints voted
in Round 1, and the resulting outcomes on these constraints in Round 2, as we
did it in the lecture.

5.8 Coordinated vs. sequential budgeting with compensa-


tions
We have 3 individuals who pay tax t. Tax revenues are used to pay for private
goods g1 , g2 , where balanced budget must be satisfied, 3t = g1 + g2 . Utility
functions are as follows:

u1 = g1 − t

u2 = g2 − t
u3 = −t
We have two types of budgeting. In sequential budgeting, the individuals
use majority voting to determine g1 in Stage 1, and then use majority voting to
determine g2 in Stage 2. In coordinated budgeting, the individuals use majority
voting to determine t in Stage 1, and then use majority voting to determine
allocation of 3t into g1 , g2 in Stage 2. In each stage, costless compensations
between any pair of players are possible.

a) Derive τ ∗ , g1∗ , g2∗ in sequential budgeting.

b) Derive τ ∗ , g1∗ , g2∗ in coordinated budgeting.

c) Compare the results and explain the difference or the absence of difference.

SOLUTION

a) In Stage 2, the individuals determine g2 and t is set residually, as t =


(g1 + g2 )/3. This means that we can alternatively think that individuals
determine t ≥ g1 /3√and g2 is set residually. In the absence of compensa-
tions, we use u2 = 3t − g1 − t and have

∂u1 ∂u1 ∂u2 3


= = −1, = √ − 1.
∂t ∂t ∂t 2( 3t − g1 )

Hence, Individuals 1 and 3 prefer the lowest feasible tax, t = g31 , whereas
g1
Individual 2 prefers tax that satisfies ∂u 3
∂t = 0, i.e. t = 3 + 4 .
2

In the absence of compensations, Individuals 1 and 3 would vote together


for the lowest possible tax t = g31 , where g2 = 0. Each individual would

37
obtain zero incremental utility (surplus) in Stage 2. This gives intuition
that it will be Individual 2 who will be willing to compensate the other
individuals to vote for a higher tax, and correspondingly for a strictly
positive g2 . Denote compensations to Individuals 1 and 3 as c1 , c3 ≥ 0.
Such a compensation vector will be successful if Individuals 1 and 3 cannot
make a counterproposal that would compensate each other to vote back
for t = g31 , where they earn zero. In other words, the joint net surplus of
Individuals 1 and 3 in Stage 2 must not be less than zero in total, formally

√ g1 g1
u1 − g1 + + u3 + + c1 + c3 ≥ 0.
3 3
Individual 2 must respect this constraint, and since c1 +c3 negatively enter
his or her net utility, the constraint will be satisfied with equality:

√ 2g1
c1 + c3 = g1 − − u1 − u3
3
Individual 2 maximizes net surplus in Stage 2, u2 + g31 − c1 − c3 = u2 +

u1 + u3 − g1 + g1 , which is equivalent to maximization of total payoff,
with the first-order condition

∂(u1 + u2 + u2 ) 3
= √ − 3 = 0.
∂t 2( 3t − g1 )
g1 1
Maximum is at t = 3 + 12 , or g2∗ = 14 . The compensations are

√ 2g1 g2 g2 2
c1 + c3 = g1 − − u1 − u3 = + = .
3 3 3 12
1
With symmetry, c1 = c3 = 12 . As a final check, observe that Individual 2
has strictly positive net surplus in Stage 2:

g1 p g∗ 6−3
u2 + − c1 − c3 = g2∗ − 2 − c1 − c3 = >0
3 3 12
In Stage 1, the players are expecting the outcome described above. Hence,
they take g2 as given and face the symmetric problem like in Stage 2, only
Individual 1 will be the one who compensates one of the other individuals
to increase tax (and, correspondingly, g1 ). Hence, g1∗ = 41 . Total tax is
2
t∗ = 12 = 61 . Notice that net payoff of Individual 3 is u3 + 2c3 = 0.
b) In Stage 2, tax revenues are spent, 3t = g1 + g2 . Individual 3 is indifferent
between all the allocations, whereas interests of Individuals 1 and 2 are in
conflict. If it happens that √
Individuals 1 and 2 cooperate with each other,

they maximize joint payoff 3t − g2 + g2 , and the solution is symmetric,
g1 = g2 = 23 t. If Individuals 1 and 3 cooperate with each other, they

maximize joint payoff g1 , and g1 = 3t. By analogy, if Individuals 2

38

and 3 cooperate with each other, they maximize joint payoff g2 , and
g2 = 3t. The maximal joint payoff is, quite paradoxically, for cooperation
of Individuals 1 and 2 who are in strict conflict.
We will see that in this cooperation, where g1 = g2 , there will be an equi-
librium. Consider Individual 3 who would like to charge a compensation,
so he offers an increase in g1 to Individual 1 (without loss of general-
ity) in exchange for a compensation. Maximal willingness of Individual
√ p
1 to pay is c1 = g1 − 3t/2. Maximal willingness of Individual 2 to
make a p counterproposal
√ to Individual 1 to restore symmetric allocation
p
is c2 = 3t/2 − 3t − g1 . It is easy to see that c2 > p c1 if g1 > 3t/2,
√ √
because 3t − g1 + g1 is maximized exactly for g1 = 3t/2, so
p √ p
3t − g1 + g1 ≤ 2 3t/2,
√ p p p
g1 − 3t/2 ≤ 3t/2 − 3t − g1 ,
c1 ≤ c2 .

Put simply, Individual 2 can always restore cooperation between Individ-


uals 1 and 2, facing any bargain between Individuals 1 and 3. As a result,
Individual 3 cannot offer an increase to Individual 1 that would not be
challenged by a counter-offer of Individual 2, and vice versa. The equi-
librium is g1 = g2 = 3t/2. It is interesting that this
p equilibrium occurs if
Individual 1 provides Individual 2 compensation 3t/2 to keep g1 = g2
instead of g2 = p3t, and equivalently Individual 2 provides Individual 1
compensation 3t/2 to keep g1 = g2 instead of g − 1 = 3t. However,
effectively there are no transfers between any players in equilibrium.
p
In Stage 1, the Individuals 1 and 2 expect to get u1 = u2 = 3t/2 − t,
whereas Individual 3 expects to get u3 = −t in the future. In the absence
of compensations, we would have p t = 3/8 by agreement of Individuals 1
and 2, maximizing u1 + u2 = 2 3t/2 − 2t. Individual 3 can compensate
one of the two individuals (suppose Individual 1) to decrease t; he or she
is willing to pay maximal compensation c3 ≥ 0 as long as u3 − c3 ≥ −3/8.
The maximal compensation of Individual 3 is therefore c3 = u3 + 3/8.
Joint cooperation yields t = 3/32.
The reservation payoff of Individual p
1 (given by cooperation with Individ-
ual 3) is u1 + c3 = u1 + u3 + 3/8 = 3t/2 − 2t + 3/8 = 9/16. Identically,
the reservation payoff of Individual 2 (given by cooperation with Individ-
ual 3) is 9/16. Now, are these reservation payoffs sufficient to persuade
Individual 1 or 2 to break their coalition? Yes, because payoff of each
under joint cooperation is lower,
p p
u1 = u2 = 3t/2 − t = 9/16 − 3/8 = 6/16 < 9/16.

39
The coalition of Individuals 1 and 2 is able to face deviation only if they
provide Individual 3 with some compensation that indirectly affects reser-
vation payoffs. Since we have only compensation in pairs, part of the
compensation will be provided by Individual 1 (this is the one that serves
as threat to coalition of 2 and 3), and part by Individual 2 (as a threat to
coalition 1 and 3).
We can immediately impose symmetry and denote the compensation from
each individual c > 0, so total compensation is 2c. Now, Individual 3 gets
payoff −3/8 + 2c if t = 3/8. Therefore, her maximal compensation for a
cooperating partner is lower, c3 = u3 +3/8−2c, and the reservation payoff
of Individual 1 or 2 is 9/16 − 2c. Are reservation payoffs now sufficient to
persuade either Individual 1 or 2 to break their coalition? Not any more
as long as

u1 − c = u2 − c = 6/16 − c ≥ 9/16 − 2c,


c ≥ 3/16.

OK, but is this a solution? Think about coalition of 1 and 2 properly. If


they set tax, they not only directly affect u1 and u2 , but also u3 which
indirectly enters the reservations payoffs. In general, for any t12 agreed
by the coalition of 1 and 2, we have u3 = −t12 . The reservation payoff is
u1 (t = 3/32)+u3 (t = 3/32)−u3 (t12 )−2c and the net payoffs of Individuals
1 and 2 are u1 (t12 )−c = u2 (t12 )−c. We need c set as a minimum satisfying

u1 (t12 ) − c = u2 (t12 ) − c ≥ u1 (t = 3/32) + u3 (t = 3/32) − u3 (t12 ) − 2c,


p
3t12 /2 − t12 − c ≥ 3/8 − 3/16 + t12 − 2c,
p
c = 3/16 − 3t/2 + 2t12 .
p
Net payoff is u1 (t12 ) − c = u2 (t12 ) − c = 2 3t12 /2 − 3t12 − 3/16. This is
maximized for t∗ = 1/6 and g1∗ = g2∗ = 1/4.
c) With pairwise compensations, both coordinated and sequential budgeting
yield identical taxation. The level of taxation is socially optimal, meaning
that it maximizes total utilities. This non-intuitive result stems from the
necessity of winning coalitions to take into account possible counter-offers
of non-members.

5.9 Coalitional bargaining


Parties A and B have the following preferences over tax t and spending g:

uA = 18 + 2(g + t) − (g 2 + t2 )
uB = 2(4t − t2 + 2g) − g 2

40
They bargain about the budget. We only know that they establish a Pareto-
efficient budget (i.e., under such a budget, none of the parties can be better off
without making the other party worse off). Can we expect a deficit here?

SOLUTION By rearranging, we get uA = 20 − (g − 1)2 − (t − 1)2 and uB =


12 − (g − 2)2 − 3(t − 2)2 . In other words, their preferences are quasiconcave in
each dimension, with bliss points (1, 1) and (2, 2). Both bliss points feature zero
deficit.
In the Pareto-efficient budget, the slope of indifference curves of both parties
must be identical.
∂uA ∂uB
∂g g−1 g−2 ∂g
− ∂uA = − =− = − ∂uB
∂t
t−1 3(t − 2) ∂t
t−4
g=
2t − 5
The deficit exists if g > t. Thus, we check if (t − 4)/(2t − 5) > t under
t ∈ (1, 2) (where the agreement will be located). By examining roots of a
polynomial, we find 2t2 − 5t + 4 < 0 holds if t ∈ (1, 2), so in fact g < t. (Be
careful when multiplying the inequality above by the negative term 2t − 5 < 0!)
The answer is no, we don’t expect budget deficit but a budget surplus.

6 Lobbying
6.1 Winner-take-all rent-seeking
Assume a winner-take-all contest for rent R, where groups X and Y compete for
the rent. X invests into rent-seeking x ≥ 0. Y observes x and invests y ≥ 0. No
extra investments are possible. Each group maximizes expected profit (expected
rent minus rent-seeking investment).

a) Suppose that the government gives rent to group X if x ≥ b; otherwise it


is given to group Y . Which (x, y) is in equilibrium?
b) Suppose that the government gives rent to group X only if x > y and
to group Y if y > x. Otherwise, the rent is allocated randomly with
probability 12 each. Which (x, y) is in equilibrium? Is it different to the
previous case?

SOLUTION

Case a) It is only X, not competition between X and Y, that determines who


is awarded a rent. Hence, Y sets y = 0. X faces a take-or-leave offer of R at
price b, which is obviously accepted if b ≤ R, and is not accepted if b ≥ R:

b ≤ R : (x, y) = (b, 0)

41
b ≥ R : (x, y) = (0, 0)

Case b) Solve by backward induction. Consider optimal investment of Y,


y = y(x). Start with x ≥ R. Y can either i) win, ii) lose, or iii) play lottery. Win
(y = x+ε > x) implies negative profit R−y = R−x−ε < 0, loss implies at best
zero profit (y = 0), and lottery implies negative profit R/2 − y = R/2 − x < 0.
Hence, loss is best response, y(x ≥ R) = 0.
If x < R, win implies strictly positive profit if x < y < R. Loss gives, at
best, zero profit. Lottery gives less than win, R/2 − x < R − y = R − (x + ε), if
for win we set a sufficiently small 0 < ε < R/2. Hence, win is the best response,
y(x < R) > x.
X anticipates this behavior. X can either i) win, or ii) lose (lottery is un-
available, because Y never prefers it!). Win is only if x ≥ R, i.e. at best for
x = R. Loss is for x < R, at best x = 0. Both options give zero profits. X is
indifferent, so we have two equilibria:

(x, y) = (0, ε > 0) or (x, y) = (R, 0).


Case a) and Case b) yield identical (but not unique!) equilibrium if and only
if b = R:
(x, y) = (R, 0).

6.2 Redistribution by pressure with tax evasion


We have 2 individuals, one productive and one unproductive. Individual 1
earns pre-tax income Y1 > 0, whereas Individual 2 earns nothing, Y2 = 0. Both
individuals can invest into political influence, ci ≥ 0 (suppose that zero income
is not a binding constraint). The unproductive individual uses influence to tax
income of the productive individual and grab the tax revenues. In contrast, the
productive individual uses the influence to reduce taxation and thereby protect
himself from expropriation. The government responds to political influence by a
proportional rule (v = 1); i.e. taxable income Y is taxed by a tax rate τ ∈ [0, 1]
such that the net gains are
c1 Y
π1 ≡ (1 − τ )Y = ,
c1 + c2
c2 Y
π2 ≡ τ Y = .
c1 + c2
In other words, the flat tax is τ = c1c+c 2
2
and all tax revenues are used as a
subsidy to the unproductive individual. In addition, the productive individual
can invest into tax evasion; protection of eY1 part of income from taxation
(where e ∈ [0, 1]) costs him e2 Y1 ; this investment decreases taxable income from
Y1 to Y = (1 − e)Y1 .
a) If tax evasion is impossible, what are the equilibrium τ ∗ , c∗1 , c∗2 ?
b) If tax evasion is possible, what are the equilibrium e∗ , τ ∗ , c∗1 , c∗2 ?

42
SOLUTION We know from the lecture that rent-seeking investments under
imperfect competition of 2 players for prize of any value Y are for both ci = Y4 ,
and since investments are symmetric, each gains (in net terms) Y2 − Y4 = Y4 .
Y1
a) Without tax evasion, Y = Y1 , hence c∗1 = c∗2 = 4 , and τ ∗ = 21 .

b) With tax evasion, the productive individual thinks about net payoff for
different values of e. Investing into e affects his payoff in three ways: i)
eY1 is saved; ii) e2 Y1 is lost, and iii) rent-seeking yields net gain Y4 < Y41 .
The payoff writes
 
(1 − e)Y1 3 1
π1 = eY1 − e2 Y1 + = Y1 −e2 + e + .
4 4 4
3
By the first-order condition, we get −2e + 4 = 0, hence

3 5 5 1
e∗ = , Y = Y1 , c∗1 = c∗2 = Y1 , τ ∗ = .
8 8 32 2

6.3 Lobbying a bureaucrat


You are a foreign investor and you need to lobby a decisive senior bureaucrat.
There are 2 bureaucrats (A, B), both looking identically important, but only
one is decisive. Lobbying a single bureaucrat costs c > 0.

1. Describe your optimal strategy and derive your expected payoff of playing
this strategy.

2. Suppose there is a lobbyist who has exclusive access to the bureaucrats


and knows who is decisive. He gives you recommendation whom to lobby,
but you decide who will be lobbied. Recommendation is costless, so you
only compensate the lobbyist for the lobbying cost c. Carefully describe
all equilibria. Describe posterior beliefs in equilibrium (i.e., equilibrium
probability that a recommended bureaucrat is decisive). What is your
equilibrium expected payoff? Is it always better than in the previous
case? Can it be lower than in the previous case?

3. What happens if you can sign up a contract with the lobbyist such that he
or she does not charge the lobbying cost if, following his recommendation,
the bureaucrat appears not to be decisive? Again, carefully describe equi-
libria, posterior beliefs that a recommended bureaucrat is decisive, and
also equilibrium payoffs. Are you always better off than in the previous
case where such a contract could not be signed?

4. Now, suppose the lobbyist has non-exclusive access to the bureaucrats.


If you decide to use the lobbyist as an intermediary, you compensate the
lobbyist for lobbying a single bureaucrat by amount l, where c < l < 3c/2,

43
and you are obliged to follow the lobbyist’s recommendation. Again, care-
fully describe equilibria, posterior beliefs that a recommended bureaucrat
is decisive, and also equilibrium payoffs.

SOLUTION

1. Your prior beliefs are 1/2 for each bureaucrat. To describe your strategy,
let p ∈ [0, 1] be the probability that you select bureaucrat A in the first
round. For the second round, it is obvious that you select the remaining
(decisive) bureaucrat with probability 1. The expected payoff is
   
1 1 1 1 3
p (−c) + (−2c) + (1 − p) (−c) + (−2c) = − c.
2 2 2 2 2

Thus, in equilibrium, you can select bureaucrat A with any p ∈ [0, 1].

2. First of all, notice that a lobbyist’s costs are always compensated, so his
payoff is always zero. If a non-decisive bureaucrat is lobbied in the first
round, it is obvious that the other (decisive) bureaucrat is lobbied, so your
extra costs in the second stage are c.
Thus, the problem can be represented by the following simultaneous game:

You/Lobbyist truth lie


follow −c, 0 −2c, 0
not follow −2c, 0 −c, 0

Denote f ∈ [0, 1] your probability of following, and t ∈ [0, 1] the lobbyist’s


probability of telling truth. Your best-response correspondence f (t) is (i)
t < 1/2 : f = 0, (ii) t = 1/2 : f ∈ [0, 1], and (iii) t > 1/2 : f = 1.
The lobbyist is indifferent, so all these strategy profiles can be equilibrium
profiles. Your equilibrium payoffs are equilibrium-dependent: (i) t < 1/2 :
t(−2c) + (1 − t)(−c) = −c(1 + t), (ii) t = 1/2 : −3c/2, and (iii) t >
1/2 : t(−c) + (1 − t)(−2c) = −c(2 − t). Clearly, the equilibrium payoff
is minimized when t = 1/2 (because the advice is useless, only replicates
your prior beliefs), and equals to the payoff in the previous case. In the
other cases, the payoff is bigger (because the advice is at least partially
useful).
Your posterior belief that the recommended bureaucrat is decisive is also
equilibrium-dependent, and equals t. (Conditional probability of selecting
a bureaucrat is t + 1 − t = 1, where only t is for decisive bureaucrat, hence
by Bayes rule, the posterior is t/1 = t.) Again, this shows that t = 1/2 is
an equilibrium where advice is useless, because the prior belief equals the
posterior belief.
3. In the second stage, you know the truth with certainty, so you have to

44
distinguish between following when truth is recommended, f t , and follow-
ing when lie is recommended, f l . To be able to study deviations, denote
the expected payoffs of the last stage as (e1 , e2 ); since in the last stage
you have to find the decisive bureaucrat, and in this case you always have
to pay (regardless whether the decisive bureaucrat was or was not recom-
mended), these payoffs must be (e1 , e2 ) = (−c, 0).
Now, if the lobbyist recommends truth, your following ends the game with
payoffs (−c, 0) and not following proceeds the game with (−c+e1 , 0+e2 ) =
(−2c, 0). Hence, f t = 1, and the expected payoff is (−c, 0).
If the lobbyist recommends lie, your not following ends the game with
(−c, 0) and following gives (0+e1 , −c+e2 ) = (−c, −c) (i.e., game proceeds,
but you don’t pay). You are indifferent, play any f l ∈ [0, 1]. The expected
payoff in this case is (−c, −cf l ). As a result, the second stage gives either
t = 1 and f l ≥ 0, or t < 1 and f l = 0. In any case, the expected payoff of
the second stage is (−c, 0).
With payoff in the second stage, we can rewrite the game:

You/Lobbyist truth lie


follow −c, 0 −c, −c
not follow −2c, 0 −c, 0

We have two equilibria, either (truth, follow) or (lie, not follow). Your
equilibrium payoff is always (−c, 0). Irrespective of the equilibrium, the
game always ends in the first stage. The posterior belief is either zero or
one.

4. In the second-stage, not to using lobbyist means cost c, whereas using


lobbyist gives you at best l > c, hence lobbyist is not used, and the
payoffs are (−c, 0). Enter into the payoff matrix:

You/Lobbyist truth lie


ask lobbyist −l, l − c −l − c, l − c
lobby alone −3c/2, 0 −3c/2, 0

The lobbyist is always indifferent. Your expected payoff from asking lob-
byist is (−l)t + (−l − c)(1 − t) = −l + (1 − t)(−c), and expected payoff
from lobbying alone is −3c/2. There is a critical level t∗ = (2l − c)/2c
that determines your best-response: (i) t < t∗ , you lobby alone, (ii)
t = t∗ , you are indifferent, and (iii) t > t∗ , you ask the lobbyist. From
c < l < 3c/2, you can easily derive that all three equilibria types exist,
because 1/2 < t∗ = (2l − c)/2c < 1.

45
7 State aid
7.1 Bailouts under government’s budget constraint
Like in the lecture, let managers determine the type of business. To find a good
business, the manager must pay 0 < c < b. To find bad business is costless.
Firm with bad business must be bailed out to survive. The manager obtains
wage b > 0 if a company survives.
Suppose we have n firms. Each manager has different ci (different ability),
but obtains identical (market) wage b in the case of survival. The government is
willing to bail out all bad firms (each bailout costs 1), but is restricted by having
only 0 < m < n in the budget (hence can make only m bailouts at maximum).

a) Suppose that firms expect the probability of an average bad firm being
bailed out to be β ∈ [0, 1]. Derive the number of firms that will choose a
bad business and thereby demand bailout, d = d(β).
b) Derive the equilibrium probability of rescue, β ∗ (implicit solution is suffi-
cient; I recommend to plot a graph to illustrate the solution).

c) What is the number of firms that are bailed out in equilibrium?

d) For the equilibrium number of firms that demand bailout, d∗ , do we have


d∗ < m, d∗ = m or d∗ > m?

SOLUTION It is useful to define distribution function of managers ability,


F (c), where F (0) = 0 and F (b) = 1 (by assumption ∀i : ci < b).

a) A manager compares two options, good project (payoff b − ci ) and bad


project with demand for bailout (payoff βb). Bailout is demanded if ci >
(1 − β)b. Using the distribution function, we have

d(β) = n[1 − F ((1 − β)b)].

We will denote it df (β) since this function describes behavior of firms.

b) The probability of bailout is given as follows: i) for sufficiently low bailout


demands (d ≤ m), all projects can be bailed out, β = 1; ii) otherwise
(d > m) we have rationing and probability of bailout is β = m d . We
can define its inverse function dg (β) = m β , which gives us the number of
demands corresponding to a probability of bailout β; superscript g is here
to capture that this function describes behavior of the government.
Equilibrium is characterized by df (β ∗ ) = dg (β ∗ ). Hence, the equilibrium
probability of rescue is implicitly given by

m
β ∗ [1 − F ((1 − β ∗ )b)] = .
n

46
Figure 7: Demand and supply of bailouts

The equilibrium condition df (β ∗ ) = dg (β ∗ ) can also be described on the


following figure:

c) Using definition of df (β), we have

m
d∗ = .
b∗
d) On the graph, we immediately see β ∗ ∈ ( m
n , 1). From c), we get

d∗ ∈ (m, n).

In other words, we always have some firms that demand bailout but are
not satisfied (demands are rationed = rationing). Moreover, we always
have that some firms demand bailout and are satisfied. Probability of
bailout is strictly positive, but never equal 1.

8 Rent-seeking
8.1 Entry into a public tender
Suppose a politician needs a highway of value v > 0. There are three construc-
tion companies, with costs per highway 0 < c1 < c2 < c3 < v. These costs
are known to all. The company which builds a highway is identified in a ten-
der. Tender is organized such that each invited company submits a sealed bid
of offer price, pi . By tender law, the politician has to select the lowest price,
p∗ = min pi . The winner of tender has profit pi − ci , and the losers have zero.
The law however can not determine how the politician sets tender conditions.
Suppose that the politician can set conditions in any way, that is, he invites the
companies. This opens a window of rent-seeking. Specifically, suppose each
company promises bi ≥ 0 to the politician. The politician then decides on

47
invitations. Those who are invited then have to pay the promise bi . Consider 4
options how to organize rent-seeking:

• Fixed entry fee, B: A company has to promise at least bi ≥ B to be


invited.

• Winner-take-all contest: Only the company with max bi is invited.

• Pairwise contest: Only the company with min bi is not invited.

• No rent-seeking: All companies are invited.



The politician
P values both saved public money (v−p ) as well as rent-seeking
P
contributions ( i bi ). Let the relative weight be w : 1, π = w(v − p) + i bi .

a) Find equilibrium prices in tender for any subset of participants (i.e., a


single bidder, two bidders, or three bidders). Solve all questions using
only pure strategies.
P
b) Find total rent seeking contributions in equilibrium, i bi , for all positive
fixed entry fees, B > 0.
c) Which of the fixed entry fees does the politician prefer?

d) Find equilibrium for a winner-take-all contest, pairwise contest, and no


rent-seeking.

e) Among options 2–4, when does the politician prefer a winner-take-all con-
test? When a pairwise contest? When no rent seeking?

f) Which of the four options is preferred by the politician?

SOLUTION Q1. Tender prices The non-empty subsets of participants are


as follows:

• Single bidder: The politician can only reject price p ≥ v, so p∗ = v.2

• Two bidders with costs cj < ck : Any offer price of a bidder i ∈ {j, k} has
to recover costs, pi ≥ ci . The bidder j with the lower minimal offer price
beats the bidder k by setting any cj < pj ≤ ck (then, bidder k has no
incentive to bid less and win, because it implies negative profits; bidder j
has positive profit pj − cj > 0). Thus, p∗ = ck .

• Three bidders: Like above, bidder 1 beats bidder 2 (and also bidder 3) by
setting c1 < p1 ≤ c2 < c3 . Thus, p∗ = c2 .
2 The politician is exactly indifferent between accepting offer and rejecting it for p∗ = v.

We would need to set p∗ = v − ε, where ε > 0 is infinitesimally small, to make him strictly
reject the offer. To avoid the nuisance of introducing these negligible variables, we normally
use assumption that if individuals are indifferent between two actions, they play the action
that we (the modelers) need.

48
Q2. Equilibria The company is either invited or not. So, if a company
decides not to be invited, it is optimal to play bi = 0. If it wants to be invited,
it is optimal to play bi = B, not more. We can now construct a three-player
simultaneous game.

Table 16: Tender participants if Company 3 doesn’t pay the fee

b3 = 0 b2 = 0 b2 = B
b1 = 0 ∅ {2}
b1 = B {1} {1, 2}

Table 17: Tender participants if Company 3 pays the fee

b3 = B b2 = 0 b2 = B
b1 = 0 {3} {2, 3}
b1 = B {1, 3} {1, 2, 3}

Now, we construct payoff tables. We use the tender prices derived in Q1.

Table 18: Payoffs if Company 3 doesn’t pay the fee

b3 = 0 b2 = 0 b2 = B
b1 = 0 0, 0, 0 0, v − c2 − B, 0
b1 = B v − c1 − B, 0, 0 c2 − c1 − B, −B, 0

Table 19: Payoffs if Company 3 pays the fee

b3 = B b2 = 0 b2 = B
b1 = 0 0, 0, v − c3 − B 0, c3 − c2 − B, −B
b1 = B c3 − c2 − B, 0, −B c2 − c1 − B, −B, −B

It is easy to see that Company 3 pays the fee only if B < v − c3 . The only
subset of tender players involving Company three is {3}. From Table 18, we
see that {1, 2} is not an equilibrium. So, the only suspected equilibria are as
follows:

1. No bidder (∅): This requires B > v − c1 . Total rent-seeking contributions


are zero.

2. Single player 1: By checking best responses, we see that this requires just
B ≤ v − c1 . Total contributions is B (only Company 1 contributes).

49
3. Single player 2: Again check best responses, and see that this requires c2 −
c1 < B ≤ v − c2 . Total contributions is B (only Company 2 contributes).
4. Single player 3: Again check best responses, and see that this requires c3 −
c1 < B ≤ v − c3 . Total contributions is B (only Company 3 contributes).

Q3. Entry fees As long as B ≤ v − c1 , the politician can expect total


contributions B, irrespective of which equilibrium is selected. A single tender
bidder then submits p∗ = v, and total politician’s payoff is:
X
π = w(v − p∗ ) + bi = B.
i

Entry fee maximizing politician’s payoff is B = v − c1 , and π = v − c1 .

Q4. Other options If all three are allowed to enter, it is clear that a unique
equilibrium is not to pay anything, b1 = b2 = b3 = 0. The politician’s payoff is
X
π = w(v − p∗ ) + bi = w(v − c2 ).
i

In pairwise contest, denote the company that remains out as loser. Now, can
Company 1 be a loser? If so, then winner in tender is Company 2, and payoff of
Company 2 must be positive c2 − c3 − b2 > 0, or b2 < c2 − c3 . Company 1 is not
willing to get into tender only if the payment b2 is prohibitively high. Since its
expected rent is c2 − c1 , this requirement writes c2 − c1 − b2 < 0, or b2 > c2 − c1 .
However, this together implies impossibility, b2 < c2 − c3 < c2 − c1 < b2 .
Company 1 therefore must be in tender. If Company 1 is expected to be
in tender, the expected rent of any of the other companies is zero. Hence,
Companies 2 and 3 set b2 = b3 = 0 and their chance of getting into tender splits
in half. Under such bids, Company sets b1 = ε > 0 just to be sure to be in
tender. Notice that neither of Companies 2 or 3 has an incentive to outbid the
other and improve a chance to be in tender, because this would imply a net loss.
With equal probability of Companies 2 and 3 to be in tender, the politician’s
expected payoff is precisely
X .  1

π = w(v − p∗ ) + bi = w v − (c2 + c3 ) < w(v − c2 ).
i
2
Winner-take-all option is not difficult either. The winner’s rent is v − ci
(the winner will be a single bidder and offers price p∗ = v). Thus, Company 1
expects the relatively highest rent (conditional on victory). So, if Company 1
.
pays the politician b1 = v − c2 − ε = v − c2 , none of Companies 2 and 3 will bid
above that to capture exclusive entry. To sustain bid b1 = v − c2 , notice that
we need b2 = v − c2 , but this is only promised by Company 2, not actually paid.
The politician’s expected payoff is
X
π = w(v − p∗ ) + bi = v − c2 .
i

50
Q5. Which of the contests? Clearly, pairwise contest is always worse
than no-rent seeking. The problem for the politician is that elimination of a
competitor is not valuable for the other competitors, so they do not contribute
anything as a rent-seeking expenditure. In contrary, the elimination makes the
competition less intensive, because full entry implies that the winner has to beat
the second-best alternative.
To compare winner-take-all rent-seeking with no rent-seeking, it only de-
pends on w. If w < 1 (private contributions weight much more than savings in
the budget), then winner-take-all rent seeking contest is preferred as it pushes
the strongest company to outbid the competitors by valuable private contribu-
tions to the politician. Tender is then virtually meaningless. If w > 1 (savings
in budget matter a lot), it is better to make the company pay high price offi-
cially in tender.

Q6. Preferred options The non-dominated options give v − c1 (entry fee),


w(v − c2 ) (no rent seeking), and v − c2 (winner-take-all contest). If w < 1, entry
fee is preferred. If w > 1, it depends:
v−c1
• Moderate w, 1 < w < v−c2 : entry fee is still preferred
v−c1
• Large w, w > v−c2 : no rent seeking

In other words, either you have full competition, or you try to restrict com-
petition just to a single bidder. If it pays off to restrict it to the single bidder,
then fixed entry fee is more valuable. Fixed entry fee targets the bidder with
the highest valuation, and tries to share his profits without introducing simul-
taneous rent-seeking competition with the other companies. This demonstrates
why rent-seeking is difficult to detect if the politician so strong that his entry
fee is taken as non-negotiable.

8.2 Rent-seeking in Gambit


Tie is important if the strategy set is discrete. We may eliminate ties in the
following way: Suppose 2 players, rent R = 6 and strategy sets S1 = {0, 2, 4, 6},
S2 = {1, 3, 5, 7}.

1. Select any 1 < v < ∞, construct a strategic game in Gambit and find all
Nash equilibria. Discuss (especially equilibrium payoffs).

2. Select v = ∞, construct a strategic game in Gambit and find all Nash


equilibria. Discuss and compare to the previous case.

As output, I prefer PDF-converted print-outs.

51
x2i
SOLUTION Consider v = 2, hence πi = x2i +x2−i
R−xi . We construct a payoff
3
matrix (π1 , π2 ) (e.g., in MS Excel ) and enter into Gambit (see the following
table).

The unique equilibrium strategy profile consists of mixed strategy ( 31 65


96 , 96 , 0, 0)
5 91
for Player 1 and mixed strategy ( 96 , 96 , 0, 0) for Player 2. This gives Player 1
expected payoff zero, and Player 2 expected payoff 47 .
In the continuous case, we would get a unique symmetric equilibrium x1 =
x2 = R2 , where the rent is completely dissipated, and surplus for players is zero.
This is obtained from the following FOC and symmetry x1 = x2 :

dπ1 2x1 x22


= 2 R−1=0
dx1 (x1 + x22 )2

In our case, however, Player 2 can play x2 = 3 = R2 , but Player 1 not. What
we observe is that Player 1 tends to play lower offers (in 2/3 cases playing
x1 = 2, and in 1/3 cases x1 = 0), and Player 2’s best response are lower offers
as well. The fact that each player’s expected offer is less than the theoretical
offer ( 65 5 91
96 · 2 < 3, 96 · 1 + 96 · 3 < 3) explains why surplus is positive. Interestingly,
however, the surplus is fully captured by Player 2 whose strategy set is less
‘constrained’, meaning that the available actions are closer to the theoretical
(continuous-type) equilibrium best responses.
Consider now v = ∞. It predicts full dissipation through mixed strategies
imposing equal probability on each actions. The payoff matrix is as follows:

Gambit computes two equilibria. In each, Player 2’s mixed strategy is


( 13 , 31 , 31 , 0), hence its expected offer is 3.

• Offensive play. Player 1 may play (0, 13 , 31 , 31 ), hence its expected offer
is 4. There are 9 events, each occurring with probability 91 . Player 1
wins in 6 events, and loses in 3 events. Hence, its expected payoff is zero
( 69 · 6 − 4 = 0). Player 2 wins in 3 events.
3 In Excel, it is convenient to compute nominators and denominators in payoffs of Player 1

and 2 separately so that we can enter rational numbers into Gambit.

52
• Defensive play. Player 1 may play ( 13 , 31 , 31 , 0), hence its expected offer
is 2. There are 9 events, each occurring with probability 91 . Player 1
wins in 3 events, and loses in 6 events. Hence, its expected payoff is zero
( 39 · 6 − 2 = 0). Player 2 wins in 6 events.

We may conclude: (i) Player 2’s advantage in previous case now may turn
into a disadvantage. (ii) We may obtain both underdissipation (positive surplus)
but also overdissipation (negative surplus). Player 2 cannot protect itself from
negative surplus since his minimum offer cannot be zero, but one.

8.3 Strategic leadership in Gambit


We have an extensive game of rent-seeking with R = 4, v = 1, where Player 1
plays x1 ∈ S = {0, 1, 2}, then Player 2 observes his choice, and plays x2 ∈ S.

1. Construct an extensive game and find all Nash equilibra. Discuss.

2. Suppose now that Player 1, observing Player 2’s action, can add up x01 ∈
S. Construct an extensive game and find all Nash equilibra. Create
an equivalent strategic game and show the strategy sets of both players.
Discuss and compare to the previous case.

SOLUTION AND DISCUSSION. Part 1 We prepare payoff matrix,


where we assume that for (x1 , x2 ) = (0, 0), rent is not allocated. (Recall our
discussion from the lecture.) The game tree is as follows:
Our theoretical prediction is based on subgame-perfection (and backward
induction). In any subgame initiated by Player 1’s action, Player 2 responds by
x2 = 1. Player 1 anticipating the best response selects x1 = 1, and equilibrium
payoffs are 1 each. This is equivalent to a simultaneous game with a continuous
set of actions, which we solved in the lecture. Hence, leadership yields no
strategic advantage.
In simulation, Gambit computes 45 equilibria. Equilibria 1–21 are not
subgame-perfect. Why? By subgame-perfection, Player 2 on the equilibrium
path responds to x1 = 1 by playing x2 = 1. Here, Player 2 instead mixes ac-
tions {0, 1, 2}, and this makes Player 2 to x1 = 2. These equilibria are not very
appealing, since they involve threats that are not realized on the equilibrium
path, and remain off equilibrium path (even if they constitute Nash equilibria).
Payoffs are always ( 32 , 31 ).
Equilibria 22–45 are all subgame-perfect. Why? Player 1 plays x1 = 1 and
then, in infoset defined by observing Player 1 to play x1 = 1, Player 2 plays
x2 = 1. The only difference between these equilibria are in nodes that are off
equilibrium path. Payoffs are always (1, 1), exactly as given in theory.

Part 2 We now add possibility of Player 1 to incrementally increase its offer.


The game tree now looks as in the following figure. As usually, we solve first
analytically subgame-perfect equilibria (SPNE). By backward induction, Player

53
1 in last Stage plays x01 = 1 in his/her infosets 2–4 and x01 = 0 in infosets 5–10.
(In other words, Player 1 exploits his option of incrementally increasing offer
only if he starts with zero offer, x1 = 0.) Anticipating this, Player 2 plays always
x2 = 1 regardless of x1 . Finally, Player 1 anticipating x2 = 1 and his/her best
responses in his/her infosets 2–10, mixes in infoset 1 actions x1 = 0 and x1 = 1.
The expected payoff as well as payoff in each event are (1, 1), exactly as in a
benchmark simultaneous model.
If we allow Gambit to compute equilibria, it runs into trouble if we demand
all equilibria. The reason is computational complexity, as we shall see in detail
below. Instead, we may ask for a single equilibrium. Then, Gambit computes
SPNE where actions x1 = 0 and x1 = 1 are mixed with probability 21 each. This
is reflected in the following figure.
We may also for as many equilibria as possible, and we obtain many non-
subgame-perfect equilibria. I have got 19 non-SPNE, with expected payoffs
( 13 , 32 ), ( 23 , 31 ), (1, 1) and (2, 0). Alternatively, one could use quantal-response
equilibria, which is a solution allowing for systematic noise in actions of players,
and this converges to the SPNE equilibrium with uniform mix.
The strategic form reveals complexity of even so structurally relatively sim-
ple problems: Pure strategy set of Player 1 has 310 elements (3 actions per 10
infosets), Player 2’s pure strategy set comprises 33 elements. The set of pure

54
strategy profiles thus contains 313 = 1 594 323 elements that have to be exam-
ined. Even worse, in mix-strategy terms, recall that a probability distribution
over three actions must be element in two-dimensional simplex, to be denoted
P . Then, a Player 1’s mixed strategy x1 defines probabilities over actions in
each infoset, hence it is a 10 × 2-matrix, x1 ∈ P 10 :
 1 
p0 p11
 p20 p21 
 
x1 =  . .. 
 .. . 
10 10
p0 p1
For Player 2, a mixed strategy is a 3 × 2-matrix, x2 ∈ P 3 . The set of
mixed-strategy profiles is thus P 10 × P 3 = P 13 , computationally equivalent to
a 26-dimensional simplex.
To conclude: The introduction of strategic leadership has changed the game
only if we think of non-subgame-perfect equilibria. Then, however, there are
multiple equilibria based on non-realized threats. If we come back to subgame-
perfection, neither the strategic leadership itself nor the leader’s possibility to
unilaterally increase the initial offer affects the equilibrium payoff. We may thus
call our textbook model relatively robust to modifications in terms of leadership.

8.4 Credit constraint


Consider contest over rent R, where Player 1 can invest any amount x1 ≥ 0,
but Player 2 can invest only x2 ∈ [0, z], where 0 < z < R (possibly given his
low initial income and credit constraint).
1. For the winner-take-all rent-seeking contest, derive equilibria for all pos-
sible realizations of z.
2. For the proportional contest, derive equilibria for all possible realizations
of z.

SOLUTION Winner-take-all Any action of Player 1 where x1 > z brings


victory, hence payoff R − x1 . Thus, there is (in limit) a reservation payoff R − z
that must be provided by any alternative action. Similarly for Player 2, any
action must provide at least a reservation payoff 0. Consider equilibria where
all feasible actions are in the support of players. Then, the conditions stated
above require:
F2 (x1 )R − x1 = R − z
F1 (x2 )R − x2 = 0
Rewriting, for x ∈ [0, z),
x 0 1
F1 (x) = , F1 (x) = ,
R R
x z 1
F2 (x) = + 1 − , F20 (x) = .
R R R

55
For the sake of completeness, F1 (x) = F2 (x) = 1 if x ≥ z.

Proportional
√ contest As we know from the lecture, best responses are x∗i (x−i ) =
√ √
x−i ( R − x−i ). These are increasing in x−i up to x−i = R4 , and decreasing
since then. The intersection is for x1 = x2 = R4 . In our case, the only difference
√ √ √
is that for Player 2, best response is x∗i = min{ x−i ( R − x−i ), z}. Thus,
we have two possibilities:
R
• z≥ 4: The constraint doesn’t apply, and in equilibrium (x1 , x2 ) = ( R4 , R4 ).

• z < R4 : The constraint applies, and equilibrium is characterized by (x1 , x2 ) =


√ √ √
( z( R − z), z).

8.5 Trade policy


Importers have brought m > 0 units of a good from China into a European
country, and now want to sell it. Competitive domestic producers of the good
decide on supply s > 0 of the good, where their cost of production is s2 /2.
Domestic demand for the good is D(p) = a − p, where p is domestic price.
Assume a > m.
The politician decides either (i) not to intervene (free market), (ii) impose a
tax at value t on each unit sold from imports (tariff), or (iii) restrict the imports
by licensing only q < m units (quota).

a) Find domestic production, total production, and market price under free
market, when any tariff t ≥ 0 applies, and when quota system q ∈ [0, m]
applies.
b) Derive rent of domestic producers under tariff and quota system.
c) Derive revenues (proceeds from sales) of importers under tariff and quota
system.
d) Suppose tax beneficiaries and domestic producers create a coalition that
maximizes the sum of their payoffs. Which system would they prefer?
e) Suppose tax beneficiaries and importers create a coalition that maximizes
the sum of their payoffs. Which system would they prefer?
f) Suppose domestic producers and importers create a coalition that maxi-
mizes the sum of their payoffs. Which system would they prefer?

SOLUTION Answer a): The costs of imports are sunk costs for the im-
porters. Thus, supply of imports from abroad is constant irrespective of price,
Sm (p) = m. Domestic producers are competitive, so they supply such that the
price equals the marginal cost. The marginal cost is
∂s2 /2
= s,
∂s

56
where from equality with price s = p we get that s = Sd (p) = p.

1. Free trade: Market clears, D(p) = a − p = p + m = Sd (p) + Sm (p), i.e.,


p∗ = (a − m)/2. Total amount is D∗ = D(p∗ ) = (a + m)/2. Domestic
production is Sd∗ = p∗ = (a − m)/2.

2. Tariff: Supply from imports is constant at m, if price is non-negative.


Thus, the importers will respond to an effective price p − t by Sm (p) = m
if t ≤ p, and by Sm (p) = 0 if t ≥ p. Thus, if t ≤ t∗ , free market allocation
preserves.

3. Quota: With quota, the only difference to the free market is lower supply
from imports, Sm = q ≤ m. Market clears, D(p) = a − p = p + q =
Sd (p) + Sm (p). Price goes up, pq = (a − q)/2 > p∗ , total amount is lower,
Dq = D(pq ) = (a + q)/2 < D∗ . Domestic production is however larger,
Sdq = pq > Sd∗ .

Answer b): Under tariff t ≤ p∗ , the domestic producers cash in zero rent,
because the equilibrium market price as well as sales are identical. Under quota,
the rent is positive, and it is the difference between profits in the two regimes.

1. Free market: The profits are pSd − Sd2 /2 = (p∗ )2 /2.

2. Quota: The profits are pSd − Sd2 /2 = (pq )2 /2.

The rent is as follows:


" 2  2 #
1 a−q a−m 1
R(q) = − = (m − q) (2a − m − q)
2 2 2 8

Answer c): Under tariff, they receive (p − t)m. Under quota, they receive
pq q = (a − q)q/2.

Answer d): As we know, tariffs do not change production or market price.


The only effect in our model is that some importers’ profits are redistributed
towards tax beneficiaries. Importers are not part of the coalition, so the optimal
tariff system is to set t = p∗ . The tariff revenues are tm = (a − m)m/2.
For quota system, it depends whether tax beneficiaries are also consumers.
If not, tax beneficiaries are not affected, and quota is set such that it maximizes
rent
q = arg max R(q) = arg max(m − q)(2a − m − q).
The rent function falls in q (the slope is 2q − 2a < 0, because q ≤ m < a).
Hence the optimal quota is q = 0, and rent is m(2a − m). Now, compare tariff
revenues and the maximal rent to see which is better:

(a − m)m
≥ m(2a − m) : m ≥ 3a
2

57
By assumption m < a, quota that completely prohibit imports is better for
the coalition that a non-distorting tariff. The intuition is that the coalition
doesn’t care for a loss in consumer surplus.
What if the consumers are at the same time tax beneficiaries? Then it is clear that
the optimal tariff is still t = p∗ (money for nothing): the price and the consumer
surplus are unchanged, and the coalition additionally collects p∗m. How to find the
optimal quota? A marginal change in rent (for q < m) is

∂R(q)/∂q = (2q − 2a)/8 = (q − a)/4 < 0.

A marginal change in consumer surplus (from the linearity of the demand curve, the
consumer surplus is simply C(p) = [D(p)]²/2 = (a − p)²/2, with p = pq = (a − q)/2):

dC(p)/dq = (∂[(a − p)²/2]/∂p)·(∂p/∂q) = [−(a − p)]·(−1/2) = (a − p)/2 = (a + q)/4 > 0.

The marginal change in the total payoff of the coalition is therefore

d[C(p) + R(q)]/dq = (q − a)/4 + (a + q)/4 = q/2 ≥ 0.

The total payoff is non-decreasing in q, so the best the coalition can do with a quota
is not to restrict imports at all, q = m, which replicates free trade and yields
C(p∗) + R(m) = (a + m)²/8. Is the quota system then better than the tariff system?
Compare the total payoffs:

q = m : C(p∗) + R(m) = (a + m)²/8
t = p∗ : C(p∗) + p∗·m = (a + m)²/8 + (a − m)m/2

Since 0 < m < a, the tariff system is strictly better. Thus, it does matter whether the
tax beneficiaries are also consumers: once the coalition internalizes the consumer surplus,
it does not restrict imports at all and instead extracts the importers' profits through
the non-distorting tariff t = p∗.
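The same point can be checked numerically; the sketch below (hypothetical a = 10,
m = 6, purely for illustration) shows the coalition's quota payoff rising in q and the
tariff payoff exceeding even the best quota:

# Coalition payoff (tax beneficiaries = consumers) as a function of the quota,
# compared with the non-distorting tariff t = p*.  Hypothetical a = 10, m = 6.
a, m = 10.0, 6.0
p_star = (a - m) / 2

def quota_payoff(q):
    p = (a - q) / 2                              # market price under quota q
    consumer = (a - p)**2 / 2                    # C(p) = D(p)^2 / 2
    rent = ((a - q)**2 - (a - m)**2) / 8         # producer rent relative to free trade
    return consumer + rent

print([round(quota_payoff(q), 2) for q in (0.0, 2.0, 4.0, 6.0)])  # increasing in q
print(quota_payoff(m), (a + m)**2 / 8 + p_star * m)               # tariff adds p*m on top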

Answer e): A tariff does not distort the market; it only redistributes from the importers
to the tax beneficiaries, and both groups are in this coalition. Thus, the coalition is
indifferent between any tariff t ≤ p∗ and free trade. A quota reduces consumption and
raises the price, which decreases the consumer surplus C(p); this loss outweighs any
gain in the importers' proceeds, since the coalition's total payoff C(pq) + pq·q =
(a² + 6aq − 3q²)/8 is increasing in q on [0, m]. Hence, if at least some tax beneficiaries
are also consumers, the optimal quota is the non-restrictive (free-trade) quota q = m.
As a result, such a coalition has no incentive to modify the free-market regime.

Answer f): Tariff revenues go to the tax beneficiaries, who are not in this coalition;
hence the optimal tariff is zero, and the free-trade allocation is preserved. For the
quota, maximize the total payoff of domestic producers and importers:

q = arg max [(pq)²/2 + pq·q] = arg max (1/8)(a − q)(a + 3q)
This payoff is concave in q (the derivative of (a − q)(a + 3q) is 2a − 6q), with an
unconstrained maximum at q = a/3. Hence, if imports are small, m ≤ a/3, the quota
does not bind and the coalition is content with free trade; if imports are large,
m > a/3, the coalition prefers the partially restrictive quota q = a/3 < m. The
intuition is that the importers agree to restrict the quantity sold in order to create
artificial scarcity, provided they can be compensated by the domestic producers for
the forgone sales.
Notice that a restrictive quota effectively means that part of the imported goods must
be kept off the market or destroyed. This is what we often observe with imports that
involve faked brands.
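A short numerical sketch (hypothetical a = 10, with a crude grid search we chose only
for illustration) of the coalition's payoff over feasible quotas:

# Coalition payoff of domestic producers + importers under a quota q in [0, m],
# for a = 10.  The unconstrained maximizer is q = a/3.
a = 10.0

def coalition_payoff(q):
    p = (a - q) / 2
    return p**2 / 2 + p * q       # producer profit + importer revenue = (a-q)(a+3q)/8

for m in (2.0, 8.0):              # m < a/3 and m > a/3
    qs = [m * i / 100 for i in range(101)]
    best = max(qs, key=coalition_payoff)
    print(m, round(best, 2))      # ~2.0 when m = 2 (free trade), close to a/3 when m = 8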

9 Reform
9.1 Enlargement and costly compliance
Recall the enlargement game where in Stage 1, two acceding countries simulta-
neously decide on compliance, and in Stage 2, the club decides whom to admit.
For an acceding country, the accession benefit is Y and the compliance cost is 0 < x < Y.
For the club, the entry benefits are 0 < b < B (b for one acceding country, B for two
acceding countries), which diminish by 0 < c < C in the case of non-compliance (c for
one non-compliant admitted country, C for two non-compliant admitted countries), where
B − c < b < B − C + c.

1. Suppose there is an extra Stage 0, where the club pre-commits to an accession
rule. The rule specifies which country (or countries) will be admitted for all
possible combinations of the countries' decisions on compliance. Derive all
plausible rules. Which rule should the club choose?

2. Suppose there is an extra Stage 0, where the club pre-commits to Country 1
that it will compensate its compliance cost x. What is the equilibrium (or what
are the equilibria)? Be careful. (Hint: Think of indifference.)

3. What happens if the club is willing not only to cover the compliance cost but
also to provide an extra bonus for compliance, so that the total compensation
for Country 1 (if it complies) is x̄ > x?

4. Suppose that the club needs both countries to comply with certainty, but raising
money for compensations is costly. Does the club compensate a single country
or both countries?

SOLUTION Recall the EU’s payoffs:

Compliance/Entry     None    One      Two
None complies        0       b − c    B − C
One complies         0       b        B − c
Both comply          0       b        B
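For intuition, the club's admission behaviour implied by this table can be checked with
a small sketch. The parameter values b = 4, B = 5, c = 2.5, C = 3 are hypothetical,
chosen only to satisfy the stated inequalities together with B > C, which is what the
payoff matrix in Answer b) below presumes (enlargement is worthwhile even without
compliance).

from itertools import product

# Club's best admission response for each compliance profile.
b, B, c, C = 4.0, 5.0, 2.5, 3.0

def club_payoff(admitted, compliant):
    benefit = {0: 0.0, 1: b, 2: B}[sum(admitted)]
    bad = sum(1 for a_i, c_i in zip(admitted, compliant) if a_i and not c_i)
    penalty = {0: 0.0, 1: c, 2: C}[bad]
    return benefit - penalty

for compliant in product((0, 1), repeat=2):
    best = max(product((0, 1), repeat=2), key=lambda adm: club_payoff(adm, compliant))
    print(compliant, "->", best)
# Neither complies -> both admitted; exactly one complies -> only the complier
# is admitted; both comply -> both admitted.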

Answer a) It is okay to solve this in pure strategies. The rule is a vector-valued
function of the compliance actions (c1, c2), f(c1, c2) : {0, 1} × {0, 1} → {0, 1} × {0, 1}.
The club wants to secure the maximal payoff B with certainty. This means that
compliance by both countries, (c1, c2) = (1, 1), must be the unique Nash equilibrium of
the game between the two acceding countries.
To achieve this, the rule should make compliance a dominant strategy for each country.

• If Country 1 does not comply while Country 2 complies, Country 1 should be
better off by complying. This holds if and only if Country 1 is not admitted
when it does not comply but is admitted when it complies, so that its gain from
switching is 0 < Y − x. (Convince yourself that the other admission patterns
fail to create this incentive.)

• If neither country complies, each country should be better off by complying.
Again, this holds if and only if a country is not admitted when it does not
comply but is admitted when it complies, 0 < Y − x.

Thus, the optimal club rule is f (c1 , c2 ) = (c1 , c2 ).
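A minimal sketch of this result (hypothetical Y = 5, x = 2; any 0 < x < Y gives the
same answer) confirms that under the rule f(c1, c2) = (c1, c2), compliance by both
countries is the unique pure-strategy Nash equilibrium:

from itertools import product

# Pure-strategy Nash equilibria of the compliance game under the club rule
# f(c1, c2) = (c1, c2): a country is admitted iff it complies.
Y, x = 5.0, 2.0

def payoff(own, other):
    # Under this rule, admission depends only on a country's own compliance.
    admitted = own
    return admitted * Y - own * x       # accession benefit minus compliance cost

def is_nash(c1, c2):
    return (payoff(c1, c2) >= payoff(1 - c1, c2) and
            payoff(c2, c1) >= payoff(1 - c2, c1))

print([prof for prof in product((0, 1), repeat=2) if is_nash(*prof)])   # [(1, 1)]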

Answer b) The payoff matrix of the non-cooperative game between the two countries
looks as follows:

AC1/AC2          Comply        Don't comply
Comply           Y, Y − x      Y, 0
Don't comply     0, Y − x      Y, Y

Thus, the set of equilibria does not change in comparison to the original game.
Although Country 1 strictly prefers compliance if Country 2 complies, it is indifferent
between compliance and non-compliance if Country 2 does not comply; hence the profile
in which no country complies (and Country 1 is not compensated) is also an equilibrium.

Answer c) Unlike the previous case, for Country 1, compliance is a dominant strategy,
because Y − x + x̄ > 0 and Y − x + x̄ > Y. Thus, the only equilibrium is that both
comply, and the payoffs are (Y − x + x̄, Y − x).
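Both compensation schemes can be verified with one small sketch (hypothetical Y = 5,
x = 2, x̄ = 3; the admission behaviour is the one assumed above: both are admitted
whenever their compliance decisions coincide, otherwise only the complier is admitted):

from itertools import product

# Pure-strategy Nash equilibria when the club compensates Country 1 by comp1.
Y, x, x_bar = 5.0, 2.0, 3.0

def payoffs(c1, c2, comp1):
    if c1 == c2:
        adm1 = adm2 = 1
    else:
        adm1, adm2 = c1, c2
    u1 = adm1 * Y - c1 * x + c1 * comp1   # Country 1 is compensated only if it complies
    u2 = adm2 * Y - c2 * x
    return u1, u2

def nash(comp1):
    eq = []
    for c1, c2 in product((0, 1), repeat=2):
        u1, u2 = payoffs(c1, c2, comp1)
        if u1 >= payoffs(1 - c1, c2, comp1)[0] and u2 >= payoffs(c1, 1 - c2, comp1)[1]:
            eq.append((c1, c2))
    return eq

print(nash(x))       # exact compensation: [(0, 0), (1, 1)], two equilibria
print(nash(x_bar))   # compensation with a bonus: [(1, 1)] only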

Answer d) From the previous answers we can see that compensating a country by x or
less does not guarantee compliance (at exactly x, the country remains indifferent).
Paying slightly above x to a single country is sufficient to induce full compliance.
As a consequence, only one country will be compensated: the incentive to comply for one
country makes the other country comply as well.
