Chapter 1

Introduction
1.1 What is Game Theory?

We humans cannot survive without interacting with other humans, and ironically, it sometimes seems that we have survived despite those interactions. Production and exchange require cooperation between individuals at some level, but the same interactions may also lead to disastrous confrontations. Human history is as much a history of fights and wars as it is a history of successful cooperation. Many human interactions carry the potential for cooperation and harmony as well as for conflict and disaster. Examples abound: relationships among couples, siblings, countries, management and labor unions, neighbors, students and professors, and so on.

One can argue that the increasingly complex technologies, institutions, and cultural norms that have existed in human societies have been there in order to facilitate and regulate these interactions. For example, internet technology greatly facilitates buyer-seller transactions, but also complicates them further by increasing opportunities for cheating and fraud. Workers and managers have usually opposing interests when it comes to wages and working conditions, and labor unions as well as labor law provide channels and rules through which any potential conflict between them can be addressed. Similarly, several cultural and religious norms, such as altruism or reciprocity, bring some order to potentially dangerous interactions between individuals. All these norms and institutions constantly evolve as the nature of the underlying interactions keeps changing. In this sense, understanding human behavior in its social and institutional context requires a proper understanding of human interaction.

Economics, sociology, psychology, and political science are all devoted to studying human behavior in different realms of social life. However, in many instances they treat individuals in isolation, for convenience if not for anything else. In other words, they assume that to understand one individual's behavior it is safe to assume that her behavior does not have a significant effect on other individuals. In some cases, and depending upon the question one is asking, this assumption may be warranted. For example, what a small farmer in a local market, say in Montana, charges for wheat is not likely to have an effect on the world wheat price. Similarly, the probability that my vote will change the outcome of the U.S. presidential elections is negligibly small. So, if we are interested in the world wheat price or the result of the presidential elections, we may safely assume that each individual acts as if her behavior will not affect the outcome.

In many cases, however, this assumption may lead to wrong conclusions. For example, how much our farmer in Montana charges, compared to the other farmers in Montana, certainly affects how much she and the other farmers make. If our farmer sets a price that is lower than the prices set by the other farmers in the local market, she will sell more than the others, and vice versa. Therefore, if we assume that the farmers determine their prices without taking this effect into account, we are not likely to get anywhere near understanding their behavior. Similarly, the vote of one individual may radically change the outcome of voting in small committees, and assuming that committee members vote in ignorance of that fact is likely to be misleading.

The subject matter of game theory is exactly those interactions within a group of individuals (or governments, firms, etc.) where the actions of each individual have an effect on the outcome that is of interest to all. Yet this is not enough for a situation to be a proper subject of game theory: the way that individuals act has to be strategic, i.e., they should be aware of the fact that their actions affect others. The fact that my actions have an effect on the outcome does not necessitate strategic behavior if I am not aware of that fact.
Therefore, we say that game theory studies strategic interaction within a group of individuals. By strategic interaction we mean that individuals know that their actions will have an effect on the outcome and act accordingly.

Having determined the types of situations that game theory deals with, we now have to discuss how it analyzes these situations. Like any other theory, the objective of game theory is to organize our knowledge and increase our understanding of the outside world. A scientific theory tries to abstract the most essential aspects of a given situation, analyze them using certain assumptions and procedures, and in the end derive some general principles and predictions that can be applied to individual instances. For it to have any predictive power, game theory has to postulate some rules according to which individuals act. If we do not describe how individuals behave, what their objectives are, and how they try to achieve those objectives, we cannot derive any predictions at all in a given situation. For example, one would get completely different predictions regarding the price of wheat in a local market if one assumes that farmers simply flip a coin and choose between $1 and $2 a pound than if one assumes that they try to make as much money as possible. Therefore, to bring some discipline to the analysis one has to introduce some structure in terms of the rules of the game.

The most important, and perhaps the most controversial, assumption of game theory, which brings about this discipline, is that individuals are rational.

Definition. An individual is rational if she has well-defined objectives (or preferences) over the set of possible outcomes and she implements the best available strategy to pursue them.

Rationality implies that individuals know the strategies available to each individual, have complete and consistent preferences over possible outcomes, and are aware of those preferences. Furthermore, they can determine the best strategy for themselves and flawlessly implement it. If taken literally, the assumption of rationality is certainly an unrealistic one, and if applied to particular cases it may produce results that are at odds with reality. We should first note that game theorists are aware of the limitations imposed by this assumption, and there is an active research area studying the implications of less demanding forms of rationality, called bounded rationality. This course, however, is not the appropriate place to study this area of research. Furthermore, to really appreciate the problems with the rationality assumption one has to first see its results. Therefore, without delving into too much discussion, we will argue that one should treat rationality as a limiting case. You will have enough opportunity in this book to decide for yourself whether it produces useful and interesting results. As the saying goes: "the proof of the pudding is in the eating."

The term strategic interaction is actually more loaded than alluded to above. It is not enough that I know that my actions, as well as yours, affect the outcome; I must also know that you know this fact. Take the example of the two wheat farmers. Suppose both farmer A and farmer B know that their respective choices of prices will affect their profits for the day.
But suppose A does not know that B knows this. Then, from the perspective of farmer A, farmer B is completely ignorant of what is going on in the market, and hence farmer B might set any price. This makes farmer A's decision quite uninteresting in itself. To model the situation more realistically, we then have to assume that they both know that they both know that their prices will affect their profits. One actually has to continue in this fashion and assume that the rules of the game, including how actions affect the participants and the individuals' rationality, are common knowledge. A fact X is common knowledge if everybody knows it, if everybody knows that everybody knows it, if everybody knows that everybody knows that everybody knows it, and so on. This has some philosophical implications and is subject to a lot of controversy, but for the most part we will avoid those discussions and take it as given.

In sum, we may define game theory as follows:

Definition. Game theory is a systematic study of strategic interaction among rational individuals.

Its limitations aside, game theory has been fruitfully applied to many situations in the realms of economics, political science, biology, law, etc. In the rest of this chapter we will illustrate the main ideas and concepts of game theory and some of its applications using simple examples. In later chapters we will analyze more realistic and complicated scenarios and discuss how game theory is applied in the real world. Among those applications are firm competition in oligopolistic markets, competition between political parties, auctions, bargaining, and repeated interaction between firms.

1.2 Examples

For the sake of comparison, we first start with an example in which there is no strategic interaction, and hence one which does not require game theory to analyze.

Example 1.1 (A Single Person Decision Problem). Suppose Ali is an investor who can invest his $100 either in a safe asset, say government bonds, which brings a 10% return in one year, or in a risky asset, say a stock issued by a corporation, which brings either a 20% return (if the company performance is good) or a zero return (if the company performance is bad).

                  State
              Good      Bad
    Bonds     10%       10%
    Stocks    20%        0%
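The tradeoff between the two assets can also be checked numerically; a quick sketch (plain Python, purely illustrative; the text derives the same comparison below):

```python
# Expected end-of-year wealth for Ali's two options in Example 1.1.
# Bonds pay 10% for sure; the stock pays 20% in the good state
# (probability p) and 0% in the bad state (probability 1 - p).
def expected_wealth(p):
    bonds = 110.0                     # $100 at 10%, independent of the state
    stocks = p * 120 + (1 - p) * 100  # = 100 + 20p
    return bonds, stocks

for p in (0.25, 0.5, 0.75):
    bonds, stocks = expected_wealth(p)
    print(p, bonds, stocks)
# At p = 1/2 the two options give the same expected wealth ($110);
# for p > 1/2 the stock is better on average, for p < 1/2 the bonds are.
```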

Clearly, which investment is best for Ali depends on his preferences and the relative likelihoods of the two states of the world. Let's denote the probability of the good state by p and that of the bad state by 1 − p, and assume that Ali wants to maximize the expected amount of money he has at the end of the year. If he invests his $100 in bonds, he will have $110 at the end of the year irrespective of the state of the world (i.e., with certainty). If he invests in stocks, however, with probability p he will have $120 and with probability 1 − p he will have $100. We can therefore calculate his average (or expected) money holdings at the end of the year as

    p × 120 + (1 − p) × 100 = 100 + 20 × p.

If, for example, p = 1/2, then he expects to have $110 at the end of the year. In general, if p > 1/2, then he would prefer to invest in stocks, and if p < 1/2 he would prefer bonds.

This is just one example of a single-person decision problem, in which the decision problem of an individual can be analyzed in isolation from other individuals' behavior. Any uncertainty involved in such problems is exogenous, in the sense that it is not determined or influenced in any way by the behavior of the individual in question. In the above example, the only uncertainty comes from the performance of the stock, which we may safely assume to be independent of Ali's choice of investment. Contrast this with the situation illustrated in the following example.

Example 1.2 (An Investment Game). Now suppose Ali again has two options for investing his $100. He may either invest it in bonds, which have a certain return of 10%, or he may invest it in a risky venture. This venture requires $200 to be a success, in which case the return is 20%, i.e., a $100 investment yields $120 at the end of the year. If total investment is less than $200, then the venture is a failure and yields zero return, i.e., a $100 investment yields $100. Ali knows that there is another person, let's call her Beril, who is in exactly the same situation, and there is no other potential investor in the venture. Unfortunately, Ali and Beril don't know each other and cannot communicate. Therefore, they both have to make the investment decision without knowing the decision of the other. We can summarize the returns on the investments of Ali and Beril as a function of their decisions in the table given in Figure 1.1.
The first number in each cell represents the return on Ali's investment, whereas the second number represents Beril's return. We assume that both Ali and Beril know the situation represented in this table, i.e., they know the rules of the game.

Figure 1.1: Investment Game

                          Beril
                   Bonds         Venture
    Ali  Bonds     110, 110      110, 100
         Venture   100, 110      120, 120

The existence of strategic interaction is apparent in this situation, which should be contrasted with the one in Example 1.1. The crucial element is that the outcome of Ali's decision (i.e., the return on the investment chosen) depends on what Beril does. Investing in the risky option, i.e., the venture, has an uncertain return, as was the case in Example 1.1. However, now the source of the uncertainty is another individual, namely Beril. If Ali believes that Beril is going to invest in the venture, then his optimal choice is the venture as well, whereas if he thinks Beril is going to invest in bonds, his optimal choice is to invest in bonds. Furthermore, Beril is in a similar situation, and this fact makes the problem significantly different from the one in Example 1.1.

So, what should Ali do? What do you expect would happen in this situation? At this point we do not have enough information in our model to provide an answer. First we have to describe Ali's and Beril's objectives, i.e., their preferences over the set of possible outcomes. One possibility, economists' favorite, is to assume that they are both expected payoff, or utility, maximizers. If we further take utility to be the amount of money they have, then we may assume that they are expected money maximizers. This, however, is not enough for us to answer Ali's question, for we have to give Ali a way to form expectations regarding Beril's behavior.

One simple possibility is to assume that Ali thinks Beril is going to choose bonds with some given probability p between zero and one. Then his decision problem becomes identical to the one in Example 1.1, and under this assumption we do not need game theory to solve it. But is it reasonable for him to assume that Beril is going to decide in such a mechanical way? After all, we have just assumed that Beril is an expected money maximizer as well. So, let's assume that they are both rational, i.e., they choose whatever action maximizes their expected returns, and that they both know that the other is rational. Is this enough? Well, Ali knows that Beril is rational, but this is still not enough for him to deduce what she will do.
He knows that she will do whatever maximizes her expected return, which, in turn, depends on what she thinks Ali is going to do. Therefore, what Ali should do depends on what he thinks Beril thinks that he is going to do. So, we have to go one more step and assume that not only does each know that the other is rational, but also each knows that the other knows that the other is rational. We can continue in this manner to argue that an intelligent solution to Ali's conundrum is to assume that both know that both are rational; both know that both know that both are rational; both know that both know that both know that both are rational; ad infinitum. This is a difficult problem indeed, and game theory deals exactly with this kind of problem. The next example provides a problem that is relatively easier to solve.

Example 1.3 (Prisoners' Dilemma). Probably the best known example, which has also become a parable for many other situations, is called the Prisoners' Dilemma. The story goes as follows: two suspects are arrested and put into different cells before the trial. The district attorney, who is pretty sure that both of the suspects are guilty but lacks enough evidence, offers them the following deal: if both of them confess and implicate the other (labeled C), then each will be sentenced to, say, 5 years of prison time. If one confesses and the other does not (labeled N), then the "rat" goes free for his cooperation with the authorities and the non-confessor is sentenced to 6 years of prison time. Finally, if neither of them confesses, then both suspects get to serve one year. We can compactly represent this story as in Figure 1.2, where the first number in each cell is the payoff of player 1 and the second that of player 2, measured as the negative of the number of years each will spend in prison.

Figure 1.2: Prisoners' Dilemma

                        Player 2
                     C           N
    Player 1   C   −5, −5      0, −6
               N   −6, 0      −1, −1

How would you play this game in the place of player 1? One useful observation is the following: no matter what player 2 intends to do, playing C yields a better outcome for player 1. This is so because (C, C) is a better outcome for him than (N, C), and (C, N) is a better outcome for him than (N, N). Indeed, the best outcome for player 1 is (C, N), the case in which he confesses and player 2 does not; the next best outcome for him is (N, N), then (C, C), and finally (N, C). It therefore seems only "rational" for player 1 to play C by confessing. The same reasoning for player 2 entails that this player too is very likely to play C. A very reasonable prediction here is, therefore, that the game will end in the outcome (C, C), in which both players confess to their crimes.

And this is the dilemma: wouldn't each of the players be strictly better off by playing N instead? After all, (N, N) is preferred by both players to (C, C). It is really a pity that rational individualistic play leads to an inferior outcome from the perspective of both players.

You may at first think that this situation arises only because the prisoners are put into separate cells and hence are not allowed to have pre-play communication. Surely, you may argue, if the players could debate how to play the game, they would realize that (N, N) is superior to (C, C) for both of them, and thus agree to play N instead of C. But even if such a verbal agreement were reached prior to the actual play of the game, what makes player 1 so sure that player 2 will not backstab him at the last instant by playing C? After all, if player 2 is convinced that player 1 will keep his end of the bargain by playing N, it is better for her to play C. A similar reasoning applies to player 2. Thus, both players may reasonably fear betrayal, and may choose to betray before being betrayed by playing C. So, even if such an agreement is reached, we are back to the dilemma. What do you think would happen if players could sign binding contracts?

Even if you are convinced that there is a genuine dilemma here, you may be wondering why we are making such a big deal out of a silly story. Well, first note that the "story" of the prisoners' dilemma is really only a story. The dilemma presented above corresponds to far more realistic scenarios, such as arms races, price competition, dispute settlements with or without lawyers, and so on. The common element in all these scenarios is that if everybody is cooperative a good outcome results, but nobody finds it in her self-interest to act cooperatively, and this leads to a less desirable outcome. Considering that one of the main claims of neoclassical economics is that the selfish pursuit of individual welfare yields efficient outcomes (the famous invisible hand), this observation is a very important one, and economists do take it very seriously. We find in the prisoners' dilemma a striking demonstration of the fact that the classical claim that "decentralized behavior implies efficiency" is not necessarily valid in environments with genuine room for strategic interaction.

As an example, consider the pricing game in a local wheat market (depicted in Figure 1.3), where there are only two farmers and each can either set a low price (L) or a high price (H). The farmer who sets the lower price captures the entire market, whereas if they set the same price they share the market equally.

Figure 1.3: Pricing Game

                      Farmer B
                    L         H
    Farmer A  L   1, 1      4, 0
              H   0, 4      2, 2

This example paints a very grim picture of human interactions. However, many times we observe cooperation rather than its complete failure. One important area of research in game theory is the analysis of environments, institutions, and norms which actually sustain cooperation in the face of such seemingly hopeless situations as the prisoners' dilemma. Just to illustrate one such scenario, consider a repetition of the prisoners' dilemma game. In a repeated interaction, each player has to take into account not only her payoff in each interaction but also how the outcome of each of these interactions influences future ones. For example, each player may induce cooperation by the other player by adopting a strategy that punishes bad behavior and rewards good behavior. We will analyze such repeated interactions in Chapter 9.
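The dominance reasoning behind both the prisoners' dilemma and the pricing game can be checked mechanically; a small sketch (plain Python, purely illustrative):

```python
# Find a strictly dominant strategy in a two-player game (a sketch, not a
# general-purpose solver). Payoffs are given as {(row, col): (u1, u2)}.
def dominant_strategy(payoffs, rows, cols, player):
    own, other = (rows, cols) if player == 0 else (cols, rows)

    def u(s_own, s_other):
        cell = (s_own, s_other) if player == 0 else (s_other, s_own)
        return payoffs[cell][player]

    for s in own:
        # s is strictly dominant if it beats every alternative r against
        # every possible choice t of the opponent.
        if all(u(s, t) > u(r, t) for t in other for r in own if r != s):
            return s
    return None

# Prisoners' Dilemma (Figure 1.2): C strictly dominates N for both players.
pd = {('C', 'C'): (-5, -5), ('C', 'N'): (0, -6),
      ('N', 'C'): (-6, 0), ('N', 'N'): (-1, -1)}
print(dominant_strategy(pd, ['C', 'N'], ['C', 'N'], 0))  # C
print(dominant_strategy(pd, ['C', 'N'], ['C', 'N'], 1))  # C

# Pricing game (Figure 1.3): L strictly dominates H for both farmers,
# even though (H, H) would leave both better off than (L, L).
pricing = {('L', 'L'): (1, 1), ('L', 'H'): (4, 0),
           ('H', 'L'): (0, 4), ('H', 'H'): (2, 2)}
print(dominant_strategy(pricing, ['L', 'H'], ['L', 'H'], 0))  # L
print(dominant_strategy(pricing, ['L', 'H'], ['L', 'H'], 1))  # L
```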

Example 1.4 (Rebel Without a Cause). In the classic 1955 movie Rebel Without a Cause, Jim, played by James Dean, and Buzz compete for Judy, played by Natalie Wood. Jim, Buzz, and Buzz's gang members gather by a cliff that drops down to the Pacific Ocean. Jim and Buzz are to drive toward the cliff; the first person to jump from his car is declared the chicken, whereas the last person to jump is a hero and captures Judy's heart. The situation can be represented as in Figure 1.4. Each player has two strategies: jump before the other player (B) and jump after the other player (A). If they jump at the same time (B, B), they survive but lose Judy. If one jumps before and the other after, the latter survives and gets Judy, whereas the former gets to live, but without Judy. Finally, if both choose to jump after the other (A, A), they die an honorable death.

Figure 1.4: Game of Chicken

                   Buzz
                 B        A
    Jim    B   2, 2     1, 3
           A   3, 1     0, 0

The likely outcome is not clear. If Jim thinks Buzz is going to jump before him, then he is better off waiting and jumping after. On the other hand, if he thinks Buzz is going to wait him out, he had better jump before: he is young and there will be other Judys. (In the movie, Buzz's leather jacket sleeve is caught on the door handle of his car; he cannot jump, and, even though Jim jumps, both cars, and Buzz, plunge over the cliff. In real life, James Dean killed himself and injured two passengers while driving on a public highway at an estimated speed of 100 mph.)

The game of chicken is also used as a parable of situations which are more interesting than the above story. In some situations, two individuals are supposed to take an action and the choice is the timing of that action. There are dynamic versions of the game of chicken, called wars of attrition, in which both players try to wait each other out, and the one who concedes first loses. In a war of attrition, both players desire to be the last to take the action; in the story above, the action is to jump.

Example 1.5 (Entry Game). In all the examples up to here we assumed that the players either choose their strategies simultaneously or without knowing the choice of the other player. We model such situations by using what is known as Strategic (or Normal) Form Games. In some situations, however, players observe at least some of the moves made by other players, and this is therefore not an appropriate modeling choice. Such games of sequential moves are modeled using what is known as Extensive Form Games and can be represented by a game tree, as we have done in Figure 1.5.

In this game Pepsi (P) first decides whether to enter a market currently monopolized by Coke (C). After observing Pepsi's choice, Coke decides whether to fight the entry (F) by, for example, price cuts and/or advertisement campaigns, or to acquiesce (A). In this example, we assumed that Pepsi prefers entering only if Coke is going to acquiesce, and that Coke prefers to remain a monopoly, but if entry occurs it prefers to acquiesce; hence the payoff numbers (listed as Pepsi's payoff, then Coke's) appended to the end nodes of the game tree.

Figure 1.5: Entry Game

    P --- Out -----------------> (0, 4)
     \
      -- In ---> C --- F ------> (−1, 0)
                  \
                   --- A ------> (2, 2)

 What do you think Pepsi should do?
 Is there a way for Coke to avoid entry?

Example 1.6 (Voting). Another interesting application of game theory, to political science this time, is voting. As a simple example, suppose that there are two competing bills, A and B, and three legislators, voters 1, 2, and 3, who are to vote on these bills. The voting takes place in two stages. They first vote between A and B, and then between the winner of the first stage and the status quo, denoted S. The voters' rankings of the alternatives are given in Table 1.1.

Table 1.1: Voters' Preferences

    voter 1   voter 2   voter 3
       A         B         S
       S         A         A
       B         S         B

First note that if each voter votes truthfully, A will be the winner in the first round, and it will also win against the status quo in the second round. Do you think this will be the outcome? Well, voter 3 is not very happy about that outcome and has another way to vote which would make him happier.
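Sequential games such as the Entry Game are solved from the end: first find the last mover's best reply, then the first mover's best choice given that reply; the same logic applies to the two-round voting game. A minimal sketch for the Entry Game (plain Python, purely illustrative; payoffs as reconstructed in Figure 1.5, listed as (Pepsi, Coke)):

```python
# Backward induction for the Entry Game of Figure 1.5.
# Coke moves only after Pepsi plays In.
coke_payoffs = {'F': (-1, 0), 'A': (2, 2)}

# Step 1: at Coke's node, pick the action maximizing Coke's own payoff.
coke_choice = max(coke_payoffs, key=lambda a: coke_payoffs[a][1])

# Step 2: Pepsi compares staying Out with entering, anticipating Coke's reply.
out_payoff = 0
in_payoff = coke_payoffs[coke_choice][0]
pepsi_choice = 'In' if in_payoff > out_payoff else 'Out'

print(coke_choice, pepsi_choice)  # A In
```

Note that Coke's threat to fight is not what drives the outcome here: once entry has occurred, acquiescing pays Coke more than fighting, which is exactly why the threat may fail to deter Pepsi.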

Assuming that the other voters keep voting truthfully, voter 3 can vote for B rather than A in the first round, which would make B the winner in the first round. B will then lose to S in the second round, so the outcome will be S, and voter 3 is better off. Could this be the outcome? Well, now voter 2 can switch her vote to A to get A elected in the first round, which then wins against S in the second round; since she likes A better than S, she would like to do that.

We can analyze the situation more systematically, starting from the second round. In the second round each voter should vote truthfully: they have nothing to gain, and possibly something to lose, by voting for a less preferred option. Therefore, if A is the winner of the first round, it will also win in the second round, whereas if B wins in the first round, the outcome will be S. This means that, by voting between A and B in the first round, the legislators are actually voting between A and S. Voting between A and S, voters 1 and 2 will vote for A, and the eventual outcome will be A (see Figure 1.6).

Figure 1.6: Voting Game

    First round:   A vs. B
    Second round:  first-round winner vs. S

So far, in all the examples, we have assumed that every player knows everything about the game, including the preferences of the other players. Reality, however, is not that simple. In many situations we lack relevant information regarding many components of a strategic situation, such as the identity and preferences of other players, the strategies available to them, and so on. Such games are known as Games with Incomplete (or Private) Information.

Example 1.7 (Investment Game with Incomplete Information). As an illustration, let us go back to Example 1.2, which we modify by assuming that Ali is not certain about Beril's preferences. In particular, assume that he believes, with some probability p, that Beril has the preferences represented in Figure 1.1 (the "normal" type), and that with probability 1 − p Beril is a little crazy and has some inherent tendency to take risks, even if they are unreasonable from the perspective of a rational investor; the crazy type enjoys a payoff of 120 from investing in the venture irrespective of its monetary return. We represent the new situation in Figure 1.7.

Figure 1.7: Investment Game with Incomplete Information

    Normal Beril (probability p):          Crazy Beril (probability 1 − p):

                      Beril                                  Beril
               Bonds       Venture                    Bonds       Venture
    Ali Bonds  110, 110    110, 100        Ali Bonds  110, 110    110, 120
      Venture  100, 110    120, 120          Venture  100, 110    120, 120
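If the normal type of Beril invests in bonds while the crazy type invests in the venture, Ali's problem again reduces to an expected-return comparison given his belief p; a quick sketch (plain Python, purely illustrative):

```python
# Ali's expected returns in the incomplete-information game (Figure 1.7),
# assuming the normal type of Beril (probability p) picks bonds while the
# crazy type (probability 1 - p) picks the venture.
def ali_expected_returns(p):
    bonds = 110.0                      # safe return, independent of Beril
    venture = p * 100 + (1 - p) * 120  # fails vs. normal, succeeds vs. crazy
    return bonds, venture

print(ali_expected_returns(0.25))  # Beril likely crazy: venture looks good
print(ali_expected_returns(0.75))  # Beril likely normal: bonds look good
```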

If Ali were sure that Beril was crazy, and hence would invest in the venture, then his choice would be clear: he should invest in the venture as well. More generally, suppose that the "normal" type of Beril chooses bonds and Ali believes this to be the case. Investing in bonds yields $110 for Ali irrespective of what Beril does, whereas investing in the venture has the following expected return for Ali:

    p × 100 + (1 − p) × 120 = 120 − 20p,

which is bigger than $110 if p < 1/2. Therefore, we would expect the solution to be investment in the venture for both players if Ali's belief that Beril is crazy is strong enough. How small should p be for the solution of this game to be both Ali and Beril investing in the venture?

Example 1.8 (Signalling). In Example 1.7 one of the players had incomplete information, but the players chose their strategies without observing the choices of each other; they did not have a chance to observe others' behavior and possibly learn from them. In certain strategic interactions this is not the case. When you apply for a job, for example, you try to impress your prospective boss with your resume, your dress, your manners, and so on. In other words, you try to signal your good qualities and hide the bad ones, for the employer is not exactly sure of your qualities. The employer, on the other hand, has to figure out which signals she should take seriously and which ones to discount (i.e., she tries to screen good candidates). In essence, there is a complex interaction of signalling and screening going on. This is also the case when you go out on a date with someone for the first time: each person tries to convey their good sides while trying to hide the bad ones.

Suppose, for example, that Ali takes Beril out on a date. Beril wants to marry a smart guy, but she does not know whether Ali is smart or not; she thinks he is smart or dumb with equal probabilities. Ali really wants to marry her, and tries to show that he is smart by cracking jokes and being funny in general during the date. However, being funny is not very easy, particularly so if one is dumb: to constantly try to come up with jokes that will impress her is just stressful. A dumb Ali would rather be quiet and just enjoy the food. After the date, Beril is going to decide whether to have a long-term relationship with him (call that marrying) or to dump him. Figure 1.8 illustrates the situation.

Figure 1.8: Dating Game (a game tree in which chance first determines whether Ali is smart or dumb, each with probability 1/2; Ali then chooses to be funny or quiet; and Beril, observing only Ali's behavior, chooses to marry him or dump him)

What do you think will happen at the end? Is it possible for a dumb version of Ali to be funny and marry Beril? Or do you think it is more likely for a smart Ali to marry Beril by being funny, even if the date goes no further than the dinner?

Example 1.9 (Hostile Takeovers). During the 1980s there was a huge wave of mergers and acquisitions in the United States. Many of the acquisitions took the form of "hostile takeovers," a term used to describe takeovers that are implemented against the will of the target company's management. They usually take the form of direct tender offers to shareholders: the acquirer publicly offers a price to all the shareholders. Some of these tender offers were in the form of what is known as a "two-tiered tender offer." Such was the case in 1988 when Robert Campeau made a tender offer for Federated Department Stores.

Let us consider a simplified version of the actual story. Suppose that the pre-takeover price of a Federated share is $100. Campeau offers to pay $105 per share for the first 50% of the shares, and $90 for the remainder. All tendered shares, however, are bought at the average price of the total shares tendered. If the takeover succeeds, the shares that were not tendered are worth $90 each. For example, if 75% of the shares are tendered, Campeau pays $105 for the first 50% and $90 for the remaining 25%, so the average price that Campeau pays is

    p = 105 × (50/75) + 90 × (25/75) = 100.
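This averaging rule is easy to encode; a small sketch (plain Python, purely illustrative; the text states the general formula next):

```python
# Average price per tendered share in the two-tiered offer.
# Campeau pays $105 on the first 50% of all shares and $90 beyond that,
# averaged over the s percent of shares actually tendered.
def average_price(s):
    if s <= 50:
        return 105.0
    return (105 * 50 + 90 * (s - 50)) / s

print(average_price(75))   # the 75% example in the text: 100.0
print(average_price(100))  # everybody tenders: 97.5
```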

In general, if s percent of the shares are tendered, the average price paid by Campeau, i.e., the price of a tendered share, is given by

    p = 105                                  if s ≤ 50
    p = 105 × (50/s) + 90 × ((s − 50)/s)     if s > 50

Notice that if everybody tenders, i.e., s = 100, then Campeau pays $97.50 per share, which is less than the current market price. In other words, this looks like a good deal for Campeau, but only if a sufficiently high number of shareholders tender.

 If you were a Federated shareholder, would you tender your shares to Campeau?
 Does your answer depend on what you think other shareholders will do?
 Now suppose Macy's offers $102 per share conditional upon obtaining the majority. What would you do?

The actual unfolding of events was quite unfortunate for Campeau. Macy's joined the bidding, and this increased the premium quite significantly. Campeau finally won out (not by a two-tiered tender offer, however) but paid $8.17 billion for the stock of a company with a pre-acquisition market value of $2.93 billion. Campeau financed 97 percent of the purchase price with debt. Less than two years later, Federated filed for bankruptcy and Campeau lost his job.

1.3 OUR METHODOLOGY

So, we have seen that many interesting situations involve strategic interactions between individuals and therefore lend themselves to a game-theoretical study. As we have mentioned before, game theory provides tools to analyze strategic interactions. At this point one has two options: we can either analyze each case separately, or we may try to find general principles that apply to any game. Throughout this course we will analyze abstract games and suggest "reasonable" outcomes as solutions to those games, which may then be applied to any arbitrary game-like situation. Along the way, we will discuss applications of these abstract concepts to particular cases which we hope you will find interesting. We will analyze games along two different dimensions: (1) the order of moves, and (2) information. This gives us four general forms of games, as we illustrate in Table 1.2.
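The two-tier price schedule is easy to check numerically. Below is a minimal sketch; the function name is ours, and the $105/$90 prices and 50% cutoff are the parameters of the Campeau example above:

```python
def average_price(s, high=105.0, low=90.0, cutoff=50.0):
    """Average price per tendered share when s percent of all shares are tendered.

    The first `cutoff` percent of the shares receive the high price, the rest
    receive the low price, and every tendered share is paid the resulting average.
    """
    if s <= cutoff:
        return high
    return high * (cutoff / s) + low * ((s - cutoff) / s)

# 75% tendered: the average equals the $100 pre-takeover price.
print(round(average_price(75), 2))   # 100.0
# Everybody tenders: the average drops to $97.50.
print(round(average_price(100), 2))  # 97.5
```

Note how the average falls as more shares are tendered, which is exactly what makes tendering look attractive only relative to what the other shareholders do.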

Table 1.2: Game Forms

                                      Moves
  Information    Simultaneous                         Sequential
  Complete       Strategic Form Games with            Extensive Form Games with
                 Complete Information (Example 1.1)   Complete Information (Example 1.2)
  Incomplete     Bayesian Games (Example 1.5)         Extensive Form Games with
                                                      Incomplete Information (Examples 1.7, 1.8)


The Nash Equilibrium

Levent Koçkesen, Columbia University
Efe A. Ok, New York University

As we have mentioned in our first lecture, one of the assumptions that we will maintain throughout is that individuals are rational, i.e., they take the best available actions to pursue their objectives. This is not any different from the assumption of rationality, or optimizing behavior, that you must have come across in your microeconomics classes. Remember that, in most of microeconomics, individual decision making boils down to solving the following problem:

    max_{x ∈ X} u(x, θ)

where x is the vector of choice variables, or possible actions, of the individual (such as a consumption bundle), X denotes the set of possible actions available (such as the budget set), θ denotes a vector of parameters that are outside the control of the individual (such as the price vector and income), and u is the utility (or payoff) function of the individual. If the individual in question has an optimal action, then rationality requires taking that action.

What makes a situation a strategic game is the fact that what is best for one individual, in general, depends upon other individuals' actions. The decision problem of an individual in a game can still be phrased in the above terms by treating θ as the choices of other individuals whose actions affect the subject individual's payoff. In other words, letting x = ai, X = Ai, and θ = a-i, the decision making problem of player i in a game becomes

    max_{ai ∈ Ai} ui(ai, a-i).

The main difficulty with this problem is the fact that an individual does not, in general, know the action choices of other players, whereas in single decision making problems the parameter vector, θ, is assumed to be known. Therefore, determining the best action for an individual in a game, in general, requires a joint analysis of every individual's decision problem.

In the previous section we have analyzed situations in which this problem could be circumvented: if a player has an optimal action that is independent of the other players' actions, then rationality requires taking that action, and if every individual is in a similar situation this leads to a (weakly or strictly) dominant strategy equilibrium. In other words, the only assumptions that we used to justify the dominant strategy equilibrium concept were the rationality of players (and the knowledge of own payoff function, of course). Unfortunately, many interesting games do not have a dominant strategy equilibrium, and this forces us to increase the rationality requirements for individuals. The second solution concept
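The single-agent problem max u(x, θ) can be sketched directly. A minimal illustration, in which the payoff function and action set are made up for the example; in a game, θ plays the role of the other players' action profile a-i:

```python
def optimal_actions(u, actions, theta):
    """Solve max over a finite action set for a fixed parameter theta."""
    best = max(u(x, theta) for x in actions)
    return [x for x in actions if u(x, theta) == best]

# Illustrative payoff: the player wants to match theta as closely as possible.
u = lambda x, theta: -abs(x - theta)
print(optimal_actions(u, [0, 1, 2, 3], theta=2))  # [2]
```

The whole difficulty of game theory is that, unlike here, θ is not a known number but the choice of another optimizer.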

that we introduced, the iterated elimination of dominated strategies, did just that. It required not only the rationality of each individual and the knowledge of own payoff functions, but also the (common) knowledge of other players' rationality and payoff functions. However, in this case we run into other problems: there may be too many outcomes that survive IESD actions, or, depending on the order of elimination, different outcomes may arise as outcomes that survive IEWD actions.

In this section we will analyze by far the most commonly used equilibrium concept for strategic games, the Nash equilibrium concept, which overcomes some of the problems of the solution concepts introduced before. Nash equilibrium is based on the premises that (i) each individual acts rationally given her beliefs about the other players' actions, and that (ii) these beliefs are correct. It is the second element which makes this an equilibrium concept. As we have mentioned above, the presence of interaction among players requires each individual to form a belief regarding the possible actions of other individuals; in a Nash equilibrium, each player's choice of action is a best response to the actions actually taken by his opponents. Once every individual is acting in accordance with the Nash equilibrium, no one has an incentive to unilaterally deviate and take another action. It is in this sense that we may regard a Nash equilibrium outcome as a steady state of a strategic interaction. More formally,

Definition. A Nash equilibrium of a game G in strategic form is defined as any outcome (a1*, ..., an*) such that

    ui(ai*, a-i*) ≥ ui(ai, a-i*) for all ai ∈ Ai

holds for each player i. The set of all Nash equilibria of G is denoted N(G).

(The discovery of the basic idea behind the Nash equilibrium goes back to the 1838 work of Augustin Cournot. Cournot's work is translated into English in 1897 as Researches into the Mathematical Principles of the Theory of Wealth, New York: MacMillan. The formalization and rigorous analysis of this equilibrium concept was not given until the seminal 1950 work of the mathematician John Nash. Nash was awarded the Nobel prize in economics in 1994, along with John Harsanyi and Reinhard Selten, for his contributions to game theory. For an exciting biography of Nash, we refer the reader to S. Nasar (1998), A Beautiful Mind, New York: Simon and Schuster.)

In a two player game, we may say that an outcome (a1*, a2*) is a Nash equilibrium if the following two conditions hold:

    a1* ∈ arg max_{a1 ∈ A1} u1(a1, a2*)
    a2* ∈ arg max_{a2 ∈ A2} u2(a1*, a2)

This suggests an alternative, and sometimes more useful, definition of Nash equilibrium, based on the notion of the best response correspondence. We define the best response correspondence of player i in a strategic form game as the correspondence Bi : A-i ⇒ Ai given by

    Bi(a-i) = {ai ∈ Ai : ui(ai, a-i) ≥ ui(bi, a-i) for all bi ∈ Ai} = arg max_{ai ∈ Ai} ui(ai, a-i).

(Mathematical reminder: Recall that a function f from a set A to a set B assigns to each x ∈ A one and only one element f(x) in B. A correspondence f from A to B, on the other hand, assigns to each x ∈ A a subset of B, and in this case we write f : A ⇒ B. For instance, f : [0,1] ⇒ [0,1] defined as f(x) = {y ∈ [0,1] : x ≤ y} is a correspondence; draw the graph of f. In the special case where a correspondence is single-valued, i.e., f(x) is a singleton set for each x ∈ A, then f can be thought of as a function.)

Notice that, for each a-i ∈ A-i, Bi(a-i) is a set which may or may not be a singleton. In a 2-person game, for example,

    B1(a2) = {a1 ∈ A1 : u1(a1, a2) ≥ u1(b1, a2) for all b1 ∈ A1}.

For instance, in a game in which player 1 chooses one of the actions U and D while player 2 chooses one of L, M, and R, one may compute best responses such as B1(L) = {U}, B1(M) = {U,D}, and B1(R) = {D}, while B2(U) = {M,R} and B2(D) = {L}. The following is an easy but useful observation.

Proposition B. For any 2-person game in strategic form G, we have (a1*, a2*) ∈ N(G) if, and only if, a1* ∈ B1(a2*) and a2* ∈ B2(a1*).

Exercise. Prove Proposition B.

Proposition B suggests a way of computing the Nash equilibria of strategic games. In particular, when the best response correspondences of the players are single-valued, Proposition B tells us that all we need to do is to solve two equations in two unknowns to characterize the set of all Nash equilibria (once we have found B1 and B2, that is). The following examples will illustrate.

Example. In the BoS game, B1(m) = {m}, B1(o) = {o}, B2(m) = {m}, and B2(o) = {o}. These observations also show that (m,o) and (o,m) are not equilibrium points of BoS, and we have N(BoS) = {(m,m), (o,o)}. Similar computations yield N(CG) = {(l,l), (r,r)} and N(MW) = ∅.
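Proposition B turns the computation of pure-strategy Nash equilibria into a mechanical check over action profiles. A minimal sketch; the function names are ours, and the Battle of the Sexes payoffs are illustrative numbers chosen so that both players prefer to coordinate while each favors a different outcome:

```python
from itertools import product

def best_responses(payoff, player, other_action, actions):
    """Actions of `player` that maximize its payoff against a fixed opponent action."""
    def u(a):
        profile = (a, other_action) if player == 0 else (other_action, a)
        return payoff[profile][player]
    best = max(u(a) for a in actions)
    return {a for a in actions if u(a) == best}

def nash_equilibria(payoff, actions1, actions2):
    """All pure-strategy Nash equilibria, via Proposition B: the profiles at
    which each player's action is a best response to the other's."""
    return {(a1, a2)
            for a1, a2 in product(actions1, actions2)
            if a1 in best_responses(payoff, 0, a2, actions1)
            and a2 in best_responses(payoff, 1, a1, actions2)}

# Battle of the Sexes with illustrative payoffs.
bos = {("m", "m"): (2, 1), ("m", "o"): (0, 0), ("o", "m"): (0, 0), ("o", "o"): (1, 2)}
print(sorted(nash_equilibria(bos, ["m", "o"], ["m", "o"])))  # [('m', 'm'), ('o', 'o')]
```

Running the same enumeration on a matching-pennies payoff table returns the empty set, matching N(MW) = ∅ above.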

An easy way of finding the Nash equilibria of a two-person game in strategic form is to utilize the best response correspondences and the bimatrix representation. You simply have to mark the best response(s) of each player given the action choice of the other player, and any action profile at which both players are best responding to each other is a Nash equilibrium. In the BoS game, for example, given that player 1 plays m, the best response of player 2 is to play m, which is expressed by underscoring player 2's payoff at (m,m); her best response to o is o, which is expressed by underscoring her payoff at (o,o):

         m      o
    m   2,1    0,0
    o   0,0    1,2

The same procedure is applied to player 1 as well. The set of Nash equilibria is then the set of outcomes at which both players' payoffs are underscored: N(BoS) = {(m,m), (o,o)}.

The Nash equilibrium concept has been motivated in many different ways, mostly on an informal basis. We will now give a brief discussion of some of these motivations.

Self Enforcing Agreements. Let us assume that two players debate about how they should play a given 2-person game in strategic form through preplay communication. If no binding agreement is possible between the players, then what sort of an agreement would they be able to implement, if any? Clearly, the agreement (whatever it is) should be "self enforcing" in the sense that no player should have a reason to deviate from her promise if she believes that the other player will keep his end of the bargain. A Nash equilibrium is an outcome that would correspond to a self enforcing agreement in this sense: once it is reached, no individual has an incentive to deviate from it unilaterally. Put differently, if an outcome is not a Nash equilibrium, then at least one of the players is not best responding, so an outcome which is not a Nash equilibrium lacks a certain sense of stability.

Social Conventions. Consider a strategic interaction played between two players, where player 1 is randomly picked from one population and player 2 is randomly picked from another population. For example, the situation could be a bargaining game between a randomly picked buyer and a randomly picked seller. Now imagine that this situation is repeated over time, each iteration being played between two randomly selected players. If this process settles down to an action profile, that is, if time after time the action choices of players in the role of player 1 and those in the role of player 2 are always the same, then we may regard this outcome as a convention. Even if players start with arbitrary actions, as long as they remember how the actions of the previous players fared in the past and choose those actions that are better, sooner or later a player in a given role will happen to land on a better action, which will then be adopted by the players afterwards. Any social convention must correspond to a Nash equilibrium, and thus if a convention were to develop about how to play a given game through time, we would expect this convention to correspond to a Nash equilibrium of the game.

Learned Behavior. Consider two players playing the same game repeatedly, and suppose that each player simply best responds to the action choice of the other player in the previous interaction. It is not hard to imagine that over time their play may settle on an outcome; if this happens, that outcome must possess the property of being a Nash equilibrium. There are, however, two problems with this interpretation: (1) the play may never settle down; (2) the repeated game is different from the strategic form game that is played in each period, and hence it cannot be used to justify its equilibrium.

Focal Points. Focal points are outcomes which are distinguished from others on the basis of some characteristics which are not included in the formalism of the model, such as the names of the actions. Those characteristics may distinguish an outcome as a result of some psychological or social process and may even seem trivial. Focal points may also arise due to the optimality of the actions, and Nash equilibrium is considered focal on this basis.

So, whichever of the above parables one may want to entertain, they all seem to suggest that if a reasonable outcome of a game in strategic form exists, then it has to be a Nash equilibrium outcome. In other words, being a Nash equilibrium is a necessary condition for being a reasonable outcome. But notice that this is a one-way statement: it would not be reasonable to claim that any Nash equilibrium of a given game corresponds to an outcome that is likely to be observed when the game is actually played. (More on this shortly.)

We will now introduce two other celebrated strategic form games to further illustrate the Nash equilibrium concept.

Stag Hunt (SH). Two hungry hunters go to the woods with the aim of catching a stag, or at least a hare. They can catch a stag only if they both remain alert and devote their time and energy to catching it. Catching a hare is less demanding and does not require the cooperation of the other hunter. Each hunter prefers half a stag to a hare. Letting S denote the action of going after the stag, and H the action of catching a hare, we can represent this game by the following bimatrix:

         S      H
    S   2,2    0,1
    H   1,0    1,1

One can easily verify that N(SH) = {(S,S), (H,H)}.

Hawk-Dove (HD). Two animals are fighting over a prey. The prey is worth v to each player, and the cost of fighting is c1 for the first animal (player 1) and c2 for the second animal (player 2). If they both act aggressively (hawkish) and get into a fight, they share the prey but suffer the cost of fighting. If both act peacefully (dovish), then they get to share the prey without incurring any cost. If one acts dovish and the other hawkish, there is no fight and the latter gets the whole prey.

Exercise.

(1) Write down the strategic form of this game. (2) Assume v, c1, and c2 are all non-negative and find the Nash equilibria of this game in each of the following cases: (a) c1 > v/2, c2 > v/2; (b) c1 > v/2, c2 < v/2; (c) c1 < v/2, c2 < v/2.

We have previously introduced a simple Cournot duopoly model and analyzed its outcome by applying IESD actions. Let us now try to find its Nash equilibria. We will first find the best response correspondence of firm 1. Given that firm 2 produces Q2 ∈ [0, a/b], the best response of firm 1 is found by solving the first order condition

    du1/dQ1 = (a − c) − 2bQ1 − bQ2 = 0,

which yields Q1 = (a − c)/(2b) − Q2/2. (The second order condition checks, since d²u1/dQ1² = −2b < 0.) But notice that this equation yields Q1 < 0 if Q2 > (a − c)/b, while producing a negative quantity is not feasible for firm 1. Therefore, we have

    B1(Q2) = {(a − c)/(2b) − Q2/2}   if Q2 ≤ (a − c)/b
    B1(Q2) = {0}                     if Q2 > (a − c)/b.

By using symmetry, we also find

    B2(Q1) = {(a − c)/(2b) − Q1/2}   if Q1 ≤ (a − c)/b
    B2(Q1) = {0}                     if Q1 > (a − c)/b.

Observe next that it is impossible that either firm will choose to produce more than (a − c)/b in the equilibrium (why?). Consequently, by Proposition B, to compute the Nash equilibrium all we need to do is to solve the following two equations:

    Q1* = (a − c)/(2b) − Q2*/2   and   Q2* = (a − c)/(2b) − Q1*/2.

Doing this, we find that the unique Nash equilibrium of this game is

    (Q1*, Q2*) = ((a − c)/(3b), (a − c)/(3b)).

(See Figure 1.) Interestingly, this is precisely the only outcome that survives the IESD actions.

An interesting question to ask at this point is whether it is inefficient for these firms to produce their Nash equilibrium levels of output in the Cournot model. The answer is yes, showing that the inefficiency of decentralized behavior may surface in more realistic settings than the scenario of the prisoners' dilemma suggests. To prove this, let us entertain the possibility that firms 1 and 2 collude (perhaps forming a cartel) and act as a monopolist, with the proviso that
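The best response functions above are simple enough to iterate numerically, and in this game repeated best-responding converges to the Nash equilibrium quantities. A minimal sketch; the parameter values a = 10, b = 1, c = 1 are illustrative, for which (a − c)/(3b) = 3:

```python
def best_response(q_other, a=10.0, b=1.0, c=1.0):
    """A Cournot firm's best response to the rival's quantity, truncated at zero."""
    return max(0.0, (a - c) / (2 * b) - q_other / 2)

def iterate_best_responses(q1=0.0, q2=0.0, rounds=200):
    """Let both firms repeatedly best-respond to each other's last quantity."""
    for _ in range(rounds):
        q1, q2 = best_response(q2), best_response(q1)
    return q1, q2

print(iterate_best_responses())  # (3.0, 3.0)
```

The iteration contracts toward the fixed point of the two best response equations, i.e., the intersection of B1 and B2 in Figure 1.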

the profits earned in this monopoly will be distributed equally among the firms. Given the market demand, the objective function of the monopolist is

    U(Q) = (a − c − bQ)Q,   where Q = Q1 + Q2 ∈ [0, 2a/b].

By using calculus, we find that the optimal level of production for this monopoly is Q = (a − c)/(2b). (Since the cost functions of the individual firms are identical, it does not really matter how much of this production takes place in whose plant.) Consequently,

    profits of the monopolist = (a − c − b · (a − c)/(2b)) · (a − c)/(2b) = (a − c)²/(4b),

so each firm's share of the cartel profits is (a − c)²/(8b), while

    profits of firm i in the equilibrium = ui(Q1*, Q2*) = (a − c)²/(9b).

Thus, while both parties could be strictly better off had they formed a cartel, the equilibrium predicts that this will not take place in actuality. (Do you think this insight generalizes to the n-firm case?)

Figure 1: Nash Equilibrium of Cournot Duopoly Game (the best response curves B1 and B2 cross at ((a − c)/3b, (a − c)/3b))

Remark. In the Cournot duopoly game above, (Q1*, Q2*) corresponds to a symmetric equilibrium. Formally, we define a symmetric equilibrium of a symmetric game as a Nash equilibrium of this game in which all players play the same action. (Note that this concept does not apply to asymmetric games.) There is reason to expect that symmetric outcomes will materialize in symmetric games, since in such games all agents are identical to one another. Consequently, symmetric equilibria of symmetric games are of particular interest. More

generally, suppose that G is a symmetric 2-person game in strategic form with a unique equilibrium and (a1*, a2*) ∈ N(G). Using the symmetry of G, one may easily show that (a2*, a1*) is a Nash equilibrium of G as well. Since there is only one equilibrium of G, we must then have a1* = a2*. In other words, if the Nash equilibrium of a symmetric game is unique, then this equilibrium must be symmetric.

Nash equilibrium requires only that no individual has an incentive to deviate from it. It is therefore possible that, at a Nash equilibrium, a player is indifferent between her equilibrium action and some other action, given the other players' actions. If we do not allow this to happen, we arrive at the notion of a strict Nash equilibrium. More formally, an action profile a* is a strict Nash equilibrium if

    ui(ai*, a-i*) > ui(ai, a-i*) for all ai ∈ Ai such that ai ≠ ai*

holds for each player i. For example, both Nash equilibria are strict in the Stag Hunt game, whereas a Nash equilibrium at which some player obtains the same payoff from another action against her opponents' equilibrium play is not strict.
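Strictness is easy to test mechanically: every unilateral deviation must make the deviating player strictly worse off. A minimal sketch; the function name is ours, and the Stag Hunt payoffs are the illustrative numbers used above (half a stag worth 2, a hare worth 1):

```python
def is_strict_nash(payoff, a1, a2, actions1, actions2):
    """(a1, a2) is a strict Nash equilibrium if every unilateral deviation
    strictly lowers the deviating player's payoff."""
    u1 = lambda x, y: payoff[(x, y)][0]
    u2 = lambda x, y: payoff[(x, y)][1]
    return (all(u1(b, a2) < u1(a1, a2) for b in actions1 if b != a1)
            and all(u2(a1, b) < u2(a1, a2) for b in actions2 if b != a2))

# Stag Hunt: both equilibria are strict.
sh = {("S", "S"): (2, 2), ("S", "H"): (0, 1), ("H", "S"): (1, 0), ("H", "H"): (1, 1)}
print(is_strict_nash(sh, "S", "S", ["S", "H"], ["S", "H"]))  # True
print(is_strict_nash(sh, "H", "H", ["S", "H"], ["S", "H"]))  # True
```

Replacing any strict inequality with equality in the payoff table immediately produces an equilibrium that passes the Nash test but fails this strictness test.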

The Nash Equilibrium and Dominant/Dominated Actions

Now that we have seen a few solution concepts for games in strategic form, we should analyze the relations between them. We turn to such an analysis in this section.

It follows readily from the definitions that every strictly dominant strategy equilibrium is a weakly dominant strategy equilibrium, and every weakly dominant strategy equilibrium is a Nash equilibrium. This observation can be stated in an alternative way:

Proposition C. Ds(G) ⊆ Dw(G) ⊆ N(G) for all strategic games G.

For instance, (C,C) is a Nash equilibrium for PD; in fact, this is the only Nash equilibrium of this game (do you agree?). However, there may exist a Nash equilibrium of a game which is not a weakly or strictly dominant strategy equilibrium; the BoS provides an example to this effect. What is more interesting is that a player may play a weakly dominated action in a Nash equilibrium. Here is an example:

         α      β
    α   0,0    0,0        (1)
    β   0,0    1,1

Here (α,α) is a Nash equilibrium, but playing β weakly dominates playing α for both players. A Nash equilibrium, then, need not survive the IEWD actions.

Exercise. Show that if all players have a strictly dominant strategy in a strategic game, then this game must have a unique Nash equilibrium.

Yet the following result shows that if IEWD actions somehow yields a unique outcome, then this must be a Nash equilibrium in finite strategic games.

Proposition D. Let G be a game in strategic form with finite action spaces. If the iterated elimination of weakly dominated actions results in a unique outcome, then this outcome must be a Nash equilibrium of G. (So, for instance, (1, ..., 1) must be a Nash equilibrium of the guess-the-average game.)

Proof. For simplicity, we provide the proof for the 2-person case, but it is possible to generalize the argument in a straightforward way. Let the only actions that survive the IEWD actions be a1* and a2*, and, to derive a contradiction, suppose that (a1*, a2*) ∉ N(G). Then one of the players must not be best-responding to the other; say this player is the first one, so that we have

    u1(a1*, a2*) < u1(a1′, a2*) for some a1′ ∈ A1.        (2)
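The mechanics of weak dominance in game (1) can be checked directly. A minimal sketch; the function name is ours, and the payoffs are those of the bimatrix (1), in which β weakly dominates α for both players:

```python
def weakly_dominated(payoff, player, actions_own, actions_other):
    """Own actions that are weakly dominated by some other available action:
    never better, and strictly worse against at least one opponent action."""
    def u(a, b):
        profile = (a, b) if player == 0 else (b, a)
        return payoff[profile][player]
    dominated = set()
    for a in actions_own:
        for d in actions_own:
            if d != a and all(u(d, b) >= u(a, b) for b in actions_other) \
                      and any(u(d, b) > u(a, b) for b in actions_other):
                dominated.add(a)
    return dominated

# Game (1): alpha is weakly dominated for each player.
g1 = {("a", "a"): (0, 0), ("a", "b"): (0, 0), ("b", "a"): (0, 0), ("b", "b"): (1, 1)}
print(weakly_dominated(g1, 0, ["a", "b"], ["a", "b"]))  # {'a'}
```

Removing α for both players leaves the unique outcome (β,β), which is indeed a Nash equilibrium, as Proposition D requires; the equilibrium (α,α), by contrast, is eliminated.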

But a1' must have been weakly dominated by some other action a1'' ∈ A1 at some stage of the elimination process, so that

u1(a1', a2) ≤ u1(a1'', a2)

for each a2 ∈ A2 not yet eliminated at that stage. But a2* is not yet eliminated, and thus

u1(a1', a2*) ≤ u1(a1'', a2*).

Now if a1'' = a1*, then we contradict (2). Otherwise, we continue as we did after (2) to obtain an action a1''' ∉ {a1', a1''} such that u1(a1', a2*) ≤ u1(a1''', a2*). If a1''' = a1*, we are done again; otherwise we continue this way, and eventually reach the desired contradiction since A1 is a finite set by hypothesis. ∎

3. It is important for the proposition that IEWD actions leads to a unique outcome. For example, in the BoS game all outcomes survive IEWD actions, yet the only Nash equilibrium outcomes are (m,m) and (o,o).

One can also, by trivially modifying the proof given above, show the following.

Proposition D. Let G be a game in strategic form with finite action spaces. If the IESD actions results in a unique outcome, then that outcome must be a Nash equilibrium of G. In other words, any finite and dominance solvable game has a unique Nash equilibrium.

However, a Nash equilibrium need not survive the IEWD actions, even if IEWD actions results in a unique outcome (the game given by (1) illustrates this point). But how about the IESD actions? Is it the case that a Nash equilibrium always survives the IESD actions? In contrast to the case with IEWD actions (recall Proposition C), the answer is given in the affirmative by our next result.

Proposition E. Let G be a 2-person game in strategic form. If (a1*, a2*) ∈ N(G), then a1* and a2* must survive the iterated elimination of strictly dominated actions.

Proof. We again give the proof in the 2-person case for simplicity. To obtain a contradiction, suppose that (a1*, a2*) ∈ N(G), but either a1* or a2* is eliminated at some iteration. Without loss of generality, assume that a1* is eliminated before a2*. Then there must exist an action a1' ∈ A1 (not yet eliminated at the iteration at which a1* is eliminated) such that

u1(a1*, a2) < u1(a1', a2)

for each a2 ∈ A2 not yet eliminated. Since a2* is never eliminated (by hypothesis), we then have

u1(a1*, a2*) < u1(a1', a2*),

so that (a1*, a2*) cannot be a Nash equilibrium, a contradiction. ∎
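The elimination procedure behind Propositions C-E is easy to automate for finite games. The following sketch (payoffs again taken from the game in (1); all function names are our own) runs the iterated elimination of weakly dominated actions and confirms that the unique surviving outcome is the Nash equilibrium (β, β).

```python
# Sketch: iterated elimination of weakly dominated actions (IEWD) for a
# finite 2-player game, run on the game in (1).

U = {("alpha", "alpha"): (0, 0), ("alpha", "beta"): (1, 0),
     ("beta", "alpha"): (0, 1), ("beta", "beta"): (3, 3)}

def weakly_dominated(player, a, A1, A2):
    """True if action a of `player` is weakly dominated given survivors."""
    own, others = (A1, A2) if player == 0 else (A2, A1)
    def u(x, y):  # payoff to `player` when she plays x and the rival plays y
        return U[(x, y)][0] if player == 0 else U[(y, x)][1]
    for b in own:
        if b == a:
            continue
        if all(u(b, o) >= u(a, o) for o in others) and \
           any(u(b, o) > u(a, o) for o in others):
            return True
    return False

A1, A2 = {"alpha", "beta"}, {"alpha", "beta"}
changed = True
while changed:
    changed = False
    for player, A in ((0, A1), (1, A2)):
        doomed = {a for a in A if weakly_dominated(player, a, A1, A2)}
        if doomed and doomed != A:   # always keep at least one action
            A -= doomed
            changed = True

print(A1, A2)  # {'beta'} {'beta'}: the unique survivor is the profile (beta, beta)
```

Combining this with a Nash check reproduces Proposition C on this example: the unique IEWD survivor (β, β) is indeed a Nash equilibrium.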

Difficulties with the Nash Equilibrium

Given that the Nash equilibrium is the most widely used equilibrium concept in economic applications, it is important to understand its limitations.

(1) A Nash equilibrium may involve a weakly dominated action by some players. We observed this possibility in the game given by (1). Ask yourself if (α, α) in the game (1) is a sensible outcome at all. You may say that if player 1 is "certain" that player 2 will play α, and vice versa, then it is. But if either one of the players assigns a probability in her mind that her opponent may play β, no matter how small this probability is, the expected utility maximizing (rational) action would be to play β. Since it is rare that all players are "certain" about the intended plays of their opponents (even if preplay negotiation is possible), a weakly dominated Nash equilibrium appears unreasonable. This leads us to refine the Nash equilibrium in the following manner.

Definition. An undominated Nash equilibrium of a game G in strategic form is defined as any Nash equilibrium (a1*, ..., an*) such that none of the ai*'s is a weakly dominated action. The set of all undominated Nash equilibria of G is denoted Nundom(G).

Example. If G denotes the game given in (1), then Nundom(G) = {(β, β)}. On the other hand, Nundom(G) = N(G) where G = PD, BoS, CG. The same equality holds for the linear Cournot model. (Question: Are all strict Nash equilibria of a game in strategic form undominated?)

Exercise. Compute the set of all Nash and undominated Nash equilibria of the chairman's paradox game.

(2) Nash equilibrium need not exist. For instance, N(MW) = ∅, and thus the notion of Nash equilibrium does not help us predict how the MW game would be played in practice. However, it is possible to circumvent this problem to some extent by enlarging the set of actions available to the players by allowing them to "randomize" among their actions. This leads us to the notion of a mixed strategy, which we shall talk about later in the course.

(3) Nash equilibrium need not be unique. The BoS and CG provide two examples to this effect. This is a troubling issue in that multiplicity of equilibria avoids making a sharp prediction with regard to the actual play of the game. (What do you think will be the outcome of BoS?) However, sometimes preplay negotiation and/or conventions may provide a way out of this problem; we discuss some of these as the final order of business in this chapter. In particular, it has been argued by many game theorists that the story of some games isolates certain Nash equilibria as "focal", in that certain details that are not captured by the formalism of a game in strategic form may actually entail a clear path of play.
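The refinement in the definition above can be computed directly for small games. Here is a sketch for the game in (1) (the helper names are our own): it first collects N(G), then discards any profile that uses a weakly dominated action.

```python
# Sketch: undominated Nash equilibria of game (1).

from itertools import product

U = {("alpha", "alpha"): (0, 0), ("alpha", "beta"): (1, 0),
     ("beta", "alpha"): (0, 1), ("beta", "beta"): (3, 3)}
A = ["alpha", "beta"]

def u(player, a1, a2):
    return U[(a1, a2)][player]

def is_nash(a1, a2):
    return all(u(0, d, a2) <= u(0, a1, a2) for d in A) and \
           all(u(1, a1, d) <= u(1, a1, a2) for d in A)

def dominated(player, a):
    # weakly dominated against the full action sets
    for b in A:
        if b == a:
            continue
        pairs = [(b, o, a, o) if player == 0 else (o, b, o, a) for o in A]
        if all(u(player, p[0], p[1]) >= u(player, p[2], p[3]) for p in pairs) \
           and any(u(player, p[0], p[1]) > u(player, p[2], p[3]) for p in pairs):
            return True
    return False

nash = [p for p in product(A, A) if is_nash(*p)]
undominated = [p for p in nash
               if not dominated(0, p[0]) and not dominated(1, p[1])]
print(nash, undominated)  # N(G) contains (alpha,alpha); Nundom(G) keeps only (beta,beta)
```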

Preplay Negotiation. Consider the CG game, and allow the players to communicate (cheap talk) prior to the game being played. What do you think will be the outcome then? Most people answer this question as (r,r). The reason is that agreement on the outcome (r,r) seems in the nature of things, and there is no reason why players should not play r once this agreement is reached (i.e., such an agreement is self-enforcing). Thus, pure coordination games like CG can often be "solved" via preplay negotiation. (More on this shortly.) But how about BoS? It is not at all obvious which agreement would surface in the preplay communication in this game, even though an agreement on either (m,m) or (o,o) would be self-enforcing. So, preplay negotiation does not help us "solve" the BoS. Maybe we should learn to live with the fact that some games do not admit a natural "solution."

Focal Points. The following example will illustrate.

Example. (A Nash Demand Game) Suppose that two individuals (1 and 2) face the problem of dividing $100 among themselves. They decide to use the following method in doing this: each of them will simultaneously declare how much of the $100 (s)he wishes to have, and if their total demand exceeds $100 no one will get anything (the money will then go to a charity), while they will receive their demands otherwise (anything left on the table will go to a charity).

• We may formulate this scenario as a 2-person game in strategic form where Ai = [0, 100] and

ui(x1, x2) = xi if x1 + x2 ≤ 100, and ui(x1, x2) = 0 otherwise.

(Notice that we are assuming here that money is utility, an assumption which is often useful. Caveat: this is not an unexceptionable assumption. What if the bargaining was between a father and his 5 year old daughter, or between two individuals who hate each other?)

• Play the game.

• Verify that the set of Nash equilibria of this game is {(x1, x2) ∈ [0, 100]² : x1 + x2 = 100}.

• Well, any division of $100 is an equilibrium! There are just too many equilibria here, and hence, for this game, the predictions made on the basis of the Nash equilibrium are bound to be very weak. Yet, when people actually played this game in the experiments, in an overwhelming number of times the outcome (50,50) is observed to surface.
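The "verify" bullet can be spot-checked on a coarse grid of demands. The grid step of 10 below is an assumption made only to keep the search small; the check confirms that every profile splitting the $100 exactly is a Nash equilibrium, with the focal 50-50 split among them.

```python
# Sketch: the Nash demand game on a grid of demands {0, 10, ..., 100}.

def payoff(x1, x2):
    # each player receives her demand if the demands are compatible
    return (x1, x2) if x1 + x2 <= 100 else (0, 0)

grid = range(0, 101, 10)

def is_nash(x1, x2):
    u1, u2 = payoff(x1, x2)
    return all(payoff(d, x2)[0] <= u1 for d in grid) and \
           all(payoff(x1, d)[1] <= u2 for d in grid)

equilibria = [(x1, x2) for x1 in grid for x2 in grid if is_nash(x1, x2)]
assert all((x, 100 - x) in equilibria for x in grid)  # every exact split survives
print((50, 50) in equilibria)  # True: the focal split is one of many equilibria
```

The code only illustrates the multiplicity; nothing in the Nash condition itself singles out (50,50), which is exactly the point of the focal-point discussion.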

The 50-50 split thus appears to be a focal point, suggesting that equity considerations (which are totally missed by the formalism of the game theory we have developed so far) may play a role in which Nash equilibrium gets selected in actual play.

Here is another game in strategic form with some sort of a focal point. Two players are supposed to partition the letters A, B, C, D, E, F, G, H, with the proviso that player 1's list must contain A and player 2's list must contain H. If their lists do not overlap, then they both win; they lose otherwise. (How would you play this game in the place of player 1? Player 2?) What happens very often when the game is played in the experiments is that people in the position of player 1 choose {A,B,C,D} and people in the position of player 2 choose {E,F,G,H}. So what is going on here? How do people coordinate so well?

Unfortunately, the notion of a focal point is an elusive one. The above example provides, after all, only a single instance of it, and it is difficult to come up with a theory for it, since it is not clear what is the general principle that underlies it.⁴ For more examples of this sort and a thorough discussion of focal points, an excellent reference is T. Schelling (1960), The Strategy of Conflict, London: Oxford University Press.

(4) Nash equilibrium is not immune to coalitional deviations. Consider again the CG game, in which we argued that preplay negotiation would eliminate the Nash equilibrium (l,l). The idea is that the players can jointly deviate from the outcome (l,l) through communication that takes place prior to play, for at the Nash equilibrium outcome (r,r) they are both strictly better off. This suggests the following refinement of the Nash equilibrium.

Definition. A Pareto optimal Nash equilibrium of a game G in strategic form is any Nash equilibrium a* = (a1*, ..., an*) such that there does not exist another equilibrium b* = (b1*, ..., bn*) ∈ N(G) with ui(a*) < ui(b*) for each i ∈ N. We denote the set of all Pareto optimal Nash equilibria of G by NPO(G).

A Pareto optimal Nash equilibrium outcome in a 2-person game in strategic form is particularly appealing (when preplay communication is allowed), for once such an outcome has somehow been realized, the players would not have an incentive to deviate from it either unilaterally (as the Nash property requires) or jointly (as Pareto optimality requires). As you might expect, this refinement of the Nash equilibrium delivers us what we wish to find in the CG: NPO(CG) = {(r,r)}. Unfortunately, the Pareto optimal Nash equilibrium concept does not help us "solve" the BoS, for we have NPO(BoS) = N(BoS).

4. It is our hope that experimental game theory (which we shall talk about further later on) will shed light on the matter in the future.
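The definition translates into a short filter over the Nash set. The CG payoffs below are assumed for illustration (the text does not tabulate them); the only modeling requirement used is that (r,r) makes both players strictly better off than (l,l).

```python
# Sketch: Pareto optimal Nash equilibria of a stylized coordination game.
# Payoffs are assumed: coordinating on r beats coordinating on l.

from itertools import product

U = {("l", "l"): (1, 1), ("l", "r"): (0, 0),
     ("r", "l"): (0, 0), ("r", "r"): (2, 2)}
A = ["l", "r"]

def is_nash(a1, a2):
    u1, u2 = U[(a1, a2)]
    return all(U[(d, a2)][0] <= u1 for d in A) and \
           all(U[(a1, d)][1] <= u2 for d in A)

nash = [p for p in product(A, A) if is_nash(*p)]

def pareto_optimal(p):
    # no OTHER equilibrium makes every player strictly better off
    return not any(all(b > a for a, b in zip(U[p], U[q]))
                   for q in nash if q != p)

npo = [p for p in nash if pareto_optimal(p)]
print(nash, npo)  # nash keeps (l,l) and (r,r); npo keeps only (r,r)
```

Note that the comparison runs only over other equilibria, as in the definition, not over all outcomes.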

The fact that the Pareto optimal Nash equilibrium refines the Nash equilibrium points to the fact that the latter is not immune to coalitional deviations. This is because the stability achieved by the Nash equilibrium is by means of avoiding only the unilateral deviations of each individual. Put differently, the Nash equilibrium does not ensure that no coalition of the players will find it beneficial to defect. The Pareto optimal Nash equilibrium somewhat corrects for this through avoiding defection of the entire group of the players (the so-called grand coalition) in addition to that of the individuals (the singleton coalitions). Unfortunately, this refinement does not solve the problem entirely. Here is a game in which the Pareto optimal Nash equilibrium does not refine the Nash equilibrium in a way that deals with coalitional considerations in a satisfactory way.

Example. In the following game G, player 1 chooses rows, player 2 chooses columns and player 3 chooses tables; that is, N = {1, 2, 3}, A1 = {a, b}, A2 = {α, β} and A3 = {U, D}.

   if player 3 chooses U:              if player 3 chooses D:

            α          β                        α          β
      a   1,1,-5    -5,2,7                a    1,1,6    -5,-5,0
      b   -5,-5,0    2,2,-2               b   -5,-5,0   -5,-5,-5

(For instance, u3(a, β, D) = 0.) In this game we have NPO(G) = {(a, α, D), (b, β, U)} = N(G), but coalitional considerations indicate that the equilibrium (a, α, D) is rather unstable. Indeed, provided that players can communicate prior to play, it is quite conceivable in this case that players 2 and 3 would form a coalition and deviate from the (a, α, D) equilibrium by publicly agreeing to take actions β and U. Since this is clearly a self-enforcing agreement, it casts doubt on the claim that (a, α, D) is a reasonable prediction for this game.

You probably see where the above example is leading to. It suggests that there is merit in refining even the Pareto optimal Nash equilibrium by isolating those Nash equilibria that are immune against all possible coalitional deviations. To introduce this idea formally, we need a final bit of notation.

Notation. Let A = ×i∈N Ai be the outcome space of an n-person game in strategic form, and let (a1, ..., an) ∈ A. For each K ⊆ N, we let aK denote the vector (ai)i∈K ∈ ×i∈K Ai, and a-K the vector (ai)i∈N\K ∈ ×i∈N\K Ai. By (aK, a-K), we then mean the outcome (a1, ..., an). Clearly, aK is the profile of actions taken by all players who belong to the coalition K, and we denote the set of all such profiles by AK (that is, AK = ×i∈K Ai by definition).

Similarly, a-K is the profile of actions taken by all players who do not belong to K, and A-K is a shorthand notation for the set A-K = ×i∈N\K Ai.

Definition. A strong Nash equilibrium of a game G in strategic form is any outcome a* = (a1*, ..., an*) such that, for all nonempty coalitions K ⊆ N and all aK ∈ AK, there exists a player i ∈ K such that

ui(aK*, a-K*) ≥ ui(aK, a-K*).

We denote the set of all strong Nash equilibria of G by NS(G).⁵

While its formal definition is a bit of a mouthful, all that the strong Nash equilibrium concept does is to choose those outcomes at which no coalition can find it in the interest of each of its members to deviate. Clearly, we have NS(G) ⊆ NPO(G) ⊆ N(G) for any game G in strategic form. In particular, for 2-person games the notions of Pareto optimal and strong Nash equilibrium coincide (why?). For instance, the only strong Nash equilibrium of the CG is (r,r). Similarly, in the 3-person game discussed above, we have NS(G) = {(b, β, U)}, as is desired (verify!).

Unfortunately, while the notion of the strong Nash equilibrium solves some of our problems, it is itself not free of difficulties. In particular, in many interesting games no strong Nash equilibrium exists, for it is simply too demanding to disallow all coalitional deviations. What we need instead is a theory of coalition formation, so that we can look for the Nash equilibria that are immune to deviations by those coalitions that are likely to form. At present, however, there does not exist such a theory that is commonly used in game theory; the issue awaits much further research.⁶

5. The notion of the strong Nash equilibrium was first introduced by the mathematician and economist Robert Aumann.
6. If you are interested in coalitional refinements of the Nash equilibrium, a good place to start is the highly readable paper by B. D. Bernheim, B. Peleg and M. D. Whinston (1987), "Coalition-proof Nash equilibria I: Concepts," Journal of Economic Theory, 42, pp. 1-12.
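The definition can be tested mechanically by enumerating coalitions and their joint deviations. The sketch below (helper names are our own) uses a Prisoner's Dilemma with assumed payoffs rather than the 3-person game above, and shows that NS(PD) = ∅ even though (C,C) is the game's unique Nash equilibrium: the grand coalition blocks it by jointly switching to (S,S).

```python
# Sketch: a generic strong Nash check, run on a Prisoner's Dilemma
# with assumed payoffs (C = confess, S = stay silent).

from itertools import product

ACTIONS = {0: ["C", "S"], 1: ["C", "S"]}

def u(profile):  # assumed PD payoffs
    table = {("C", "C"): (1, 1), ("C", "S"): (4, 0),
             ("S", "C"): (0, 4), ("S", "S"): (3, 3)}
    return table[profile]

COALITIONS = [[0], [1], [0, 1]]  # all nonempty coalitions of N = {1, 2}

def is_strong_nash(a):
    for K in COALITIONS:
        for dev in product(*(ACTIONS[i] for i in K)):
            b = list(a)
            for i, d in zip(K, dev):
                b[i] = d
            # the deviation blocks a if EVERY member of K strictly gains
            if all(u(tuple(b))[i] > u(a)[i] for i in K):
                return False
    return True

profiles = list(product(ACTIONS[0], ACTIONS[1]))
strong = [a for a in profiles if is_strong_nash(a)]
print(strong)  # []: (C,C) is Nash but is blocked by the grand coalition via (S,S)
```

Because singleton coalitions are included, the same routine also verifies NS(G) ⊆ N(G).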

Nash Equilibrium: Applications

Prof. Levent Koçkesen, Columbia University, and Prof. Efe A. Ok, New York University

Introduction

In this section, we shall consider several economic scenarios which are modeled well by means of strategic games. We shall also examine the predictions that game theory provides in such scenarios by using some of the equilibrium concepts that we have studied so far. Most of these examples are the toy versions of more general economic models, and we shall return to some of them in later chapters when we are better equipped to cover more realistic scenarios. One major objective of this section is actually to establish a solid understanding of the notion of Nash equilibrium, undoubtedly the most commonly used equilibrium concept in game theory. We contend that the best way of understanding the pros and cons of Nash equilibrium is seeing this concept in action. For this reason we shall consider below quite a number of examples.

Auctions

Many economic transactions are conducted through auctions. Governments sell treasury bills, foreign exchange, mineral rights, and more recently airwave spectrum rights via auctions. Art work, antiques, cars, and houses are also sold by auctions.¹ Auction theory has also been applied to areas as diverse as queues, wars of attrition, and lobbying contests.

There are four commonly used and studied forms of auctions: the ascending-bid auction (also called English auction), the descending-bid auction (also called Dutch auction), the first-price sealed bid auction, and the second-price sealed bid auction (also known as Vickrey auction²). In the ascending-bid auction, the price is raised until only one bidder remains, and that bidder wins the object at the final price. In the descending-bid auction, the auctioneer starts at a very high price and lowers it continuously until someone accepts the currently announced price; that bidder wins the object at that price. In the first-price sealed bid auction, each bidder submits her bid in a sealed envelope without seeing others' bids, and the object is sold to the highest bidder at her bid. The second-price sealed bid auction works the same way, except that the winner pays the second highest bid.

1. For a good introductory survey of auction theory, see Paul Klemperer (1999), "Auction Theory: A Guide to the Literature," Journal of Economic Surveys, 13(3), July 1999, pp. 227-286.
2. Named after William Vickrey of Columbia University, who was awarded the Nobel Prize in economics in 1996.
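The two sealed-bid payment rules described above differ only in the price the winner pays. A minimal sketch (function names are ours; ties are resolved in favor of the lowest-indexed bidder, matching the tie-breaking convention used below):

```python
# Sketch: outcome rules for the two sealed-bid auction formats.

def first_price(bids):
    winner = bids.index(max(bids))      # index() picks the lowest index on ties
    return winner, bids[winner]         # the winner pays her own bid

def second_price(bids):
    winner = bids.index(max(bids))
    others = bids[:winner] + bids[winner + 1:]
    return winner, max(others)          # the winner pays the second highest bid

print(first_price([10, 8]))   # (0, 10)
print(second_price([10, 8]))  # (0, 8)
```

With the same bids, the winner is identical under both rules; only the transfer changes, which is why the two formats can induce very different bidding incentives.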

. say). b2 . randomly selecting a winner (by means of coin toss. In fact. First-price sealed bid auction The rules of the …rst-price auction is such that after both players cast their bid (without observing each others’bid). identifying the precise nature of the outcomes in a setting like this (and in similar scenarios) under various procedures is the subject matter of a very likely sub…eld of game theory. this bargaining procedure results in the 2-person game in strategic form G = (A1 . In this section. 2). where v1 > v2 > 0: (What we mean by this is that player i is indi¤erent between buying the object at price vi and not buying it. (In case you are wondering why we are not checking for dominant strategy equilibrium. individuals are risk neutral). Our choice of the tie-breaking rule is useful in that it leads to a simple analysis. the object is awarded to player 1. of course. note that the following analysis will demonstrate that Ds (G) = Dw (G) = . 3 2 . There are other tie-breaking methods such as. u1 . let us then begin with analyzing this game theoretic scenario …rst under the most common auctioning procedure. namely the auction theory.3 Assuming that utility is money (i. For simplicity we will assume there are only two individuals.) Rather than computing the best response correspondences of the players. b2 ) = 0. + We now wish to identify the set of Nash equilibria of G. players 1 and 2. ( v1 b1 . not only because they are simpler to analyze but also because under the assumptions we will work with in this section the …rst-price sealed bid auction is strategically equivalent to descending bid auction and the second-price sealed bid auction is strategically equivalent to ascending bid auction. if b2 > b1 otherwise for all (b1 . otherwise and u2 (b1 . We …rst claim that (1) In any Nash equilibrium player 1 (the individual who values the object the most) wins the object. i = 1.In this section we will analyze the last two forms of auctions. 
While this may require a stretch of imagination.. i = 1. who are competing in an auction for a valuable object. Let us try to …nd what properties a Nash equilibrium has to satisfy. u2 ) where A1 = A2 = R+ . The reader should not …nd it di¢ cult to modify the results reported below by using other tie-breaking rules. it is commonly known that the value of the object to the player i is vi dollars. 2. In case of a tie. the highest bidder wins the object and pays her own bid. depends on the rules of the auctioning procedure. b2 ) 2 R2 : (Here bi stands for the bid of player i.) The outcome of the auction. we adopt here instead a direct approach towards this goal. if b1 b2 u1 (b1 .e. A2 . b2 ) = ( v2 0. our aim is to provide an elementary introduction to this topic.

Proof: Let (b1, b2) be a Nash equilibrium but, to derive a contradiction, suppose that player 1 does not win the object. This implies that b1 < b2, and hence player 1's payoff in equilibrium is zero. Now if b2 ≤ v2, then b2 < v1 (since v2 < v1), and bidding, say, b2 is a strictly better response for player 1 when player 2 is bidding b2. If, on the other hand, b2 > v2, then u2(b1, b2) < 0, so that bidding anything in the interval [0, b1] is a profitable deviation for player 2. In either case, we obtain a contradiction to the hypothesis that (b1, b2) is an equilibrium. ∎

So, in any Nash equilibrium (b1, b2) of G, player 1 obtains the object, and hence b1 ≥ b2. Secondly,

(2) b1 > b2 cannot hold in equilibrium,

for in this case player 1 would deviate by bidding, say, b2, and increase her payoff from v1 - b1 to v1 - b2. Together with our finding that b1 ≥ b2, this implies that b1 = b2 must hold in equilibrium. Thirdly,

(3) neither b1 < v2 nor b1 > v1 can hold (player 2 would have a profitable deviation in the first case, and player 1 in the second case).

So, we conclude that any Nash equilibrium (b1, b2) of this game must satisfy v2 ≤ b1 = b2 ≤ v1. Is any pair (b1, b2) that satisfies these inequalities an equilibrium? Yes. The inequality v2 ≤ b1 guarantees that player 2 does not wish to win the object when player 1 bids b1, so her action is optimal. The inequality b2 ≤ v1, on the other hand, guarantees that player 1 is also best responding. We thus conclude that

N(G) = {(b1, b2) : v2 ≤ b1 = b2 ≤ v1}.

Exercise. Verify the above conclusion by means of computing the best response correspondences of players 1 and 2, and plotting their graphs in the (b1, b2) space.

While N(G) is rather a large set, and hence does not lead us to a sharp prediction, refining this set by eliminating the weakly dominated actions solves this problem. Indeed, it is easily verified that bidding anything strictly higher than v2 is a weakly dominated action for player 2. To see this, suppose player 2 bids some b2' which is strictly higher than v2. If player 1's bid b1 is greater than or equal to b2', then player 2's payoffs from bidding b2' and from bidding her valuation v2 are both zero. If, on the other hand, player 1's bid is strictly smaller than b2' but greater than or equal to v2, then player 2 wins by bidding b2' but obtains a negative payoff, since she pays more than her valuation; the payoff to bidding v2, on the other hand, is zero. Finally, bidding v2 is strictly better than bidding b2' if player 1's bid is strictly smaller than v2. The following table summarizes this discussion.

                     b1 ≥ b2'      v2 ≤ b1 < b2'      b1 < v2
   bidding b2'          0          v2 - b2' < 0      v2 - b2' < 0
   bidding v2           0               0                 0
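The characterization of N(G) can be sanity-checked on a bid grid. The valuations v1 = 10, v2 = 6 and the integer grid below are assumptions; note that discreteness lets the single grid point just below v2 survive as well (at (5,5) player 2's overbid to 6 yields exactly zero), which is why the expected set runs from 5 to 10 rather than from v2 to v1.

```python
# Sketch: Nash equilibria of the first-price auction on a bid grid,
# with v1 = 10, v2 = 6 assumed; ties are awarded to player 1.

def payoffs(b1, b2, v1=10, v2=6):
    if b1 >= b2:
        return (v1 - b1, 0)      # player 1 wins and pays her bid
    return (0, v2 - b2)          # player 2 wins and pays her bid

BIDS = range(0, 13)

def is_nash(b1, b2):
    u1, u2 = payoffs(b1, b2)
    return all(payoffs(d, b2)[0] <= u1 for d in BIDS) and \
           all(payoffs(b1, d)[1] <= u2 for d in BIDS)

eq = [(b1, b2) for b1 in BIDS for b2 in BIDS if is_nash(b1, b2)]
assert all(b1 == b2 for b1, b2 in eq)   # both players bid the same amount
print(eq)
```

Up to the one-step grid artifact, this reproduces N(G) = {(b, b) : v2 ≤ b ≤ v1}.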

Consequently, we have Nundom(G) = {(v2, v2)}. (Does bidding v2 weakly dominate bids less than v2 as well?)

Now, there is an intriguing normative problem with this equilibrium: the first player is not bidding his true valuation. It is often argued that it would be desirable to design an auctioning method in which all players are induced to bid their true valuations in equilibrium. But is such a thing possible? This question was answered in the affirmative by the economist William Vickrey, who showed that truth-telling can be established even as a dominant action by modifying the rules of the auction suitably. Let us carefully examine Vickrey's modification.

Second-price sealed bid (Vickrey) auction

The rules of the second-price auction are such that, after both players cast their bids (without observing each other's bid), the highest bidder wins the object and pays the bid of the other player. In case of a tie, the object is awarded to player 1. Assuming that utility is money, this procedure results in the 2-person game in strategic form G' = (A1, A2, u1, u2) where A1 = A2 = R+,

u1(b1, b2) = v1 - b2 if b1 ≥ b2, and u1(b1, b2) = 0 otherwise,

and

u2(b1, b2) = v2 - b1 if b2 > b1, and u2(b1, b2) = 0 otherwise,

for all (b1, b2) ∈ R+². (Contrast G' with the game G we studied above.)

We now claim that Dw(G') = {(v1, v2)}. To see that bidding b1 = v1 is a dominant action for player 1, we distinguish between three cases.

Case 1. Player 2 bids strictly less than v1 (that is, b2 < v1). In this case, by bidding v1 player 1 wins the object and achieves a utility level of v1 - b2 > 0. Bidding strictly less than v1 either makes her win the object (if b2 ≤ b1 < v1) with payoff v1 - b2, the same payoff as she would get by bidding v1, or makes her lose the object (if b1 < b2 < v1) with payoff zero. Bidding strictly greater than v1, on the other hand, brings player 1 a payoff of v1 - b2 > 0 as well. So, bidding v1 is at least as good as any other bid, and sometimes it is strictly better.

Case 2. Player 2 bids v1 (that is, b2 = v1).

In this case, every bid brings player 1 a payoff of zero, so bidding v1 is optimal.

Case 3. Player 2 bids strictly more than v1 (that is, b2 > v1). In this case, by bidding v1 player 1 loses the object and obtains utility 0. So bidding v1 is again optimal for player 1, since winning the object in this case would entail negative utility for him.

Consequently, bidding v1 is a dominant action for player 1. A similar reasoning shows that bidding v2 is a dominant action for player 2, and hence we have Dw(G') = {(v1, v2)}. Since a weakly dominant strategy equilibrium is also a Nash equilibrium, we also have that (v1, v2) is a Nash equilibrium. However, there are other Nash equilibria of this game; for example, (v1, 0) is a Nash equilibrium too (verify).

Exercise. Generalize the above analysis by considering n ≥ 2 individuals, assuming that the value of the object to player i is vi dollars, i = 1, ..., n, where v1 > ... > vn, that the object is given to the highest bidder with the smallest index in both the first- and second-price auctions, and that the winner pays the second highest bid in the second-price auction.

We hope you agree that this is a very nice result. By an ingenious modification of the first-price auction, Vickrey was able to guarantee the truthful revelation of the preferences of the agents in dominant strategy equilibrium, and this even without knowing the true valuations of the individuals! This result shows that, by designing the rules of interaction carefully, one may force the individuals to coordinate on normatively appealing outcomes. Vickrey's technique provides a foundation for the theory of implementation, which has important applications in public economics, where one frequently needs to put on the mask of a social engineer. We shall talk about some of these applications later on.
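The case analysis above is easy to spot-check numerically. In the sketch below, v1 = 10 and the bid grid are assumptions; the check confirms that against every rival bid on the grid, the truthful bid is never worse than any alternative, and strictly better in at least one instance.

```python
# Sketch: truthful bidding is weakly dominant for player 1 in the
# second-price auction (v1 = 10 assumed; ties go to player 1).

def u1(b1, b2, v1=10):
    # player 1 wins whenever b1 >= b2 and then pays the rival's bid b2
    return v1 - b2 if b1 >= b2 else 0

BIDS = range(0, 16)
for alt in BIDS:
    assert all(u1(10, b2) >= u1(alt, b2) for b2 in BIDS)  # never worse
assert u1(10, 8) > u1(5, 8)  # and strictly better in some cases
print("bidding v1 = 10 weakly dominates every alternative on the grid")
```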

Buyer-Seller Games

A seller, call him player s, is in possession of an object that is worth vs dollars to him (that is, player s is indifferent between receiving vs dollars for the object and keeping the object). The value of this object is vb dollars to a potential buyer, player b. We assume in what follows that vb > vs > 0. So, since the value of the object is higher for the buyer than it is for the seller, an efficient state of affairs demands that trade takes place. But what is the price that player b will pay to player s? The buyer wants to pay only vs (well, she wants to pay nothing, but she knows that the seller will not sell the object at a price strictly less than vs), while the seller wants to charge vb. The actual price of the object will thus be determined through the bargaining of the players. Different bargaining scenarios would presumably lead to different equilibrium prices, and we shall consider here two such scenarios.⁴ In later chapters, we will return to this setting and consider much more realistic bargaining scenarios that involve sequential offers and counteroffers by the players.

Bargaining Scenario 1: Sealed-bid first-price auction. Each party proposes a price between vs and vb simultaneously (by means of a sealed bid). If the price suggested by the buyer, pb, is strictly higher than that proposed by the seller, ps, then trade takes place at price pb; otherwise there is no trade. Assuming that utility is money, this bargaining procedure results in the 2-person game in strategic form (Ab, As, ub, us) where Ab = As = [vs, vb],

ub(pb, ps) = vb - pb if pb > ps, and ub(pb, ps) = 0 otherwise,

and

us(pb, ps) = pb - vs if pb > ps, and us(pb, ps) = 0 otherwise,

for all (pb, ps) ∈ [vs, vb]².

We first observe that there is no Nash equilibrium (pb, ps) in which the buyer buys the object. Indeed, if pb > ps, then bidding anything between ps and pb (e.g. pb/2 + ps/2) would be a strictly better response for player b than playing pb against ps.

Exercise. Consider the game described above. Show that the best response correspondence of player b is given as

Bb(ps) = ∅ if ps ∈ [vs, vb), and Bb(ps) = [vs, vb] if ps = vb,

4. It is very likely that these scenarios will strike you as unrealistic. The objective of these examples is, however, not achieving a satisfactory level of realism, but rather to illustrate the use of Nash equilibrium in certain simple buyer-seller games.
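The no-trade observation for Scenario 1 rests on a midpoint-deviation argument, which can be checked numerically. In the sketch below, the buyer valuation vb = 10 and the sample profiles are assumptions; whenever a bid profile trades (pb > ps), bidding the midpoint still trades but pays strictly less.

```python
# Sketch of the midpoint-deviation argument for Scenario 1: no profile
# with trade can be a Nash equilibrium. Buyer valuation vb = 10 assumed.

def ub(pb, ps, vb=10.0):
    # buyer's payoff: trade occurs only if pb is strictly above ps
    return vb - pb if pb > ps else 0.0

for pb, ps in [(9.0, 3.0), (6.0, 5.0), (10.0, 2.0)]:
    mid = (pb + ps) / 2
    assert ub(mid, ps) > ub(pb, ps)   # the midpoint bid is strictly better
print("every trading profile admits a profitable buyer deviation")
```

Since the improving deviation itself trades, the argument iterates: there is no smallest winning bid above ps, which is why the buyer has no best response at all when ps < vb.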

say ps . vb ]. vb ) (the no-trade equilibrium) or pb = ps 2 [vs . ub . one is able to generate many equilibria in which trade occurs. for. vs . vb ] [vs . if ps 2 [vs . in many instances.and Bs (pb ) = ( [vs . vb ) [vs . Deduce from this that the only equilibrium of the game is (pb . ps ) = 0. (This is an important observation for especially the seller. (Check if undominated and/or Pareto optimal Nash equilibria provide sharper predictions here. if pb = vs in this game. ps ) 2 [vs . vb ]. ( vb pb . vb ]. if pb = vs respectively. Therefore. ps ) 2 Bb (ps ) Bs (pb ) holds if. vb ): Bargaining Scenario 2: Modi…ed sealed-bid …rst-price auction Each party proposes a price between vs and vb simultaneously (by means of a sealed bid). otherwise and us (pb . ps ) = (vs . vb ]. vb ]2 : If you have solved the exercise above. with a minor modi…cation of the bargaining procedure. if pb ps ub (pb . (pb . Consequently. pb ]. either (pb . otherwise there is no trade. Assuming again that utility is money. pb ]. and only if. this bargaining procedure results in the 2-person game in strategic form (Ab .) However. the prediction of the Nash equilibrium in the resulting game is less than satisfactory due to the large multiplicity of equilibria. if pb 2 (vs . ps ) = (vs . us ) where Ab = As = [vs . As . it is the seller who design the bargaining procedure. if ps = vb [vs .) 7 . ps ) = ( pb 0. vb ] [vs . If the price suggested by the buyer pb is at least as large as that proposed by the seller. if pb ps otherwise for all (pb . vb ] (see Figure 2). you will …nd it easy to show that we have Bb (ps ) = and Bs (pb ) = ( ( fps g. then trade takes place at price pb . if pb 2 (vs .

Price Competition Models

Game theory has many applications in the field of industrial organization. We have already encountered one such application when we considered in some detail the model of Cournot duopoly in the previous chapter. Recall that in this scenario individual firms were modeled as competing in the market by choosing their output levels. However, it has been argued in the literature that this model is not entirely satisfactory, especially if one is interested in the short-run decision making of the firms. For, as the argument goes, in the short run firms would find it too costly to adjust their output levels at will; it is rather through price setting that they engage in competition with other firms. (The argument was first given by the French mathematician Joseph Bertrand in 1883 as a critique of the Cournot model.) To deal with this problem, several oligopoly models in which firms choose their prices (as opposed to quantities) were developed in the literature. We now briefly discuss one such price competition model, which leads to a dramatically different conclusion than does the Cournot model.

Bertrand Duopoly with Homogeneous Products

Consider the market structure underlying the linear Cournot model, but this time assume that the firms in the market engage in price competition, that is, they choose how much to charge for their products. Recalling that a is the maximum price level in the market, we model the action space of firm i = 1, 2 as [0, a]. The profit function of firm i on [0, a]² in this model (called the linear Bertrand duopoly) is defined as

ui(P1, P2) = Pi Qi(P1, P2) − c Qi(P1, P2),

where Qi(P1, P2) denotes the output sold by firm i at the price profile (P1, P2). If we assume that there is no qualitative difference between the products of the two firms, it would be natural to assume that the consumers always buy the cheaper good. In case both firms charge the same price, we assume that firms 1 and 2 share the market equally. These assumptions entail that

Qi(P1, P2) = (a − Pi)/b, if Pi < Pj,
Qi(P1, P2) = (1/2)(a − Pi)/b, if Pi = Pj,
Qi(P1, P2) = 0, if Pi > Pj,

where j ≠ i = 1, 2. This completes the formulation of the model at hand as a 2-person game in strategic form. An immediate question to ask is whether our prediction (based on the Nash equilibrium) about the market outcome would be different in this model than in the linear Cournot duopoly model. The answer is easily seen to be yes. To see this, recall that in the Nash equilibrium of the linear Cournot duopoly model both firms charge the same price, namely

P1 = P2 = a − b(2(a − c)/(3b)) = a/3 + 2c/3.
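The demand rule and profit function just defined can be written down directly. The following sketch is not part of the original notes; the parameter values a = 10, b = 1, c = 2 are illustrative.

```python
# Linear Bertrand duopoly: demand with the equal-sharing tie-break, and profits.
def demand(i, P, a=10.0, b=1.0):
    Pi, Pj = P[i], P[1 - i]
    if Pi > Pj:
        return 0.0                    # priced out: sells nothing
    share = 0.5 if Pi == Pj else 1.0  # split the market on a tie
    return share * (a - Pi) / b

def profit(i, P, a=10.0, b=1.0, c=2.0):
    return (P[i] - c) * demand(i, P, a, b)
```

At the shared Cournot price a/3 + 2c/3, each firm's profit works out to (a − c)²/(9b), as computed in the text.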

Thus, if firm 1 undercuts firm 2 by charging a marginally smaller price, say a/3 + 2c/3 − ε, where ε is some small positive number, then the profit level of firm 1 increases, since this firm then grabs the entire market. Indeed, given that P2 = a/3 + 2c/3, firm 1's level of profits at P1 = P2 is found as

u1(P1, P2) = ((a − c)/3) (1/2) (2(a − c)/(3b)) = (a − c)²/(9b),

but it can easily be checked that

lim_{ε↘0} u1(a/3 + 2c/3 − ε, a/3 + 2c/3) = 2(a − c)²/(9b) > (a − c)²/(9b) = u1(a/3 + 2c/3, a/3 + 2c/3).

Thus, the Cournot prices cannot constitute an equilibrium for the linear Bertrand model.

What then is the equilibrium? The analysis outlined in the previous paragraph actually brings us quite close to answering this question. First observe that neither firm would ever charge a price below c, as this would yield negative profits (which can always be avoided by charging exactly c dollars for the unit product). Therefore, if the price profile (P1, P2) is a Nash equilibrium, we must have P1, P2 ≥ c. Is P1 > P2 > c possible? No, for in this case firm 1 would be making zero profits, and hence it can increase its profits by charging, say, P2, which will ensure positive profits given that firm 2's price is P2. By symmetry, P2 > P1 > c is also impossible. How about P1 = P2 > c? This is also impossible, because in this case either firm can unilaterally increase its profits by undercutting the other firm (just as in the discussion above), contradicting that (P1, P2) is a Nash equilibrium. We thus conclude that at least one firm must be charging precisely its unit cost c in the equilibrium. Can we have P1 > P2 = c then? No, for in this case firm 2 would not be best responding given the other's action, and it would be better for it to charge, say, P1/2 + c/2. By symmetry, P2 > P1 = c is not possible either. (How did we conclude this, really?) The only candidate for equilibrium is thus (P1, P2) = (c, c), and this is indeed an equilibrium, as you can easily check: in the Nash equilibrium of the linear Bertrand duopoly, all firms price their products at unit cost.

This is a surprising result, since it envisages that all firms operate with zero profits in the equilibrium. In fact, the equilibrium outcome here is nothing but the competitive equilibrium outcome, which is justified in microeconomics only by hypothesizing a very large number of firms who act as price takers (not setters) in the industry. Here, however, we predict precisely the same outcome in equilibrium with only two price-setting firms!

Notice that this is the third time we are observing that the tie-breaking rule is playing an important role with regard to the nature of equilibrium. This is quite typical in many interesting strategic games; consequently, it is always a good idea to inquire into the suitability of a specific tie-breaking rule in such models.

Remark. The major culprit behind the above finding is the fundamental discontinuity that the Bertrand game possesses: the tie-breaking rule introduces a discontinuity to the model, allowing firms to achieve relatively large gains through small alterations of their actions. That is, it is possible in this game to alter one's action marginally (infinitesimally) and increase the associated profits significantly. Such games are called discontinuous games, and they often do not possess a Nash equilibrium. For example, if we modify the linear Bertrand model so that the unit cost of firm 1, call it c1, exceeds that of firm 2, we obtain an asymmetric Bertrand game that does not have an equilibrium. (Exercise: Prove this.) But this is not a severe difficulty, for it arises only because we take the prices as continuous variables in the classic Bertrand model. If the medium of exchange was discrete but small, then there would exist an equilibrium of this game such that the high-cost firm 1 charges its unit cost (and thus makes zero profits) while the low-cost firm 2 grabs the entire market by charging the largest possible price strictly below c1. (Challenge: Formalize and prove this claim.)
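The two steps of the argument, that undercutting the shared Cournot price pays and that no deviation from (c, c) pays, can be checked numerically. This sketch is not from the original notes; the numbers a = 10, b = 1, c = 2 are illustrative.

```python
# Numerical check of the Bertrand argument: undercutting the Cournot price
# pays, while at P1 = P2 = c no deviation does.
def q(Pi, Pj, a=10.0, b=1.0):
    if Pi > Pj:
        return 0.0
    return (0.5 if Pi == Pj else 1.0) * (a - Pi) / b

def u(Pi, Pj, a=10.0, b=1.0, c=2.0):
    return (Pi - c) * q(Pi, Pj, a, b)

a, b, c = 10.0, 1.0, 2.0
p_cournot = a / 3 + 2 * c / 3            # the shared Cournot price
shared = u(p_cournot, p_cournot)         # equals (a - c)^2 / (9b)
undercut = u(p_cournot - 1e-6, p_cournot)  # grab the whole market
# Deviations from the candidate equilibrium (c, c), scanned on a fine grid:
deviations = [u(c + k * 0.01, c) for k in range(-100, 801)]
```

The scan confirms that every deviation from (c, c) yields at most zero profit, while undercutting the Cournot price roughly doubles a firm's profit.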

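The discrete-money claim in the Remark above can be illustrated numerically. This is a sketch under assumed numbers (prices in whole cents, c1 = 300, c2 = 200, demand intercept a = 1000), none of which appear in the original text.

```python
# Discrete-price asymmetric Bertrand: check that (P1, P2) = (c1, c1 - 1) is a
# Nash equilibrium when prices live on an integer (cent) grid.
def best_responses_check(c1=300, c2=200, a=1000):
    def q(Pi, Pj):
        if Pi > Pj:
            return 0.0
        return (0.5 if Pi == Pj else 1.0) * (a - Pi)

    def u(Pi, Pj, ci):
        return (Pi - ci) * q(Pi, Pj)

    P1, P2 = c1, c1 - 1          # candidate equilibrium from the Remark
    prices = range(0, a + 1)
    firm1_ok = all(u(P1, P2, c1) >= u(p, P2, c1) for p in prices)
    firm2_ok = all(u(P2, P1, c2) >= u(p, P1, c2) for p in prices)
    return firm1_ok and firm2_ok
```

Here the high-cost firm prices at its unit cost and earns zero, while the low-cost firm takes the whole market at the largest grid price below c1, and neither has a profitable deviation.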

Spatial Voting Games

The example of the Chairman's paradox was our first excursion into voting theory, which provides an extensive realm for fruitful applications of game theory. In this section we provide a more daring excursion, and introduce the so-called spatial voting model.

If we think of the policy space as one-dimensional, then we can identify the set of all political positions with the closed interval [0, 1]. Here we may think of 0 as the most leftist position and 1 as the most rightist position; the interpretation of any other point in [0, 1] is then given in the straightforward way. (The political spectrum is of course better modeled as being multidimensional. For instance, in national elections voters not only care about the position of a candidate on the health care reform but also about his/her positions on tax policy, social security, education reform, and so on. Allowing for multidimensionality in voting models, however, complicates matters to a considerable degree, and hence we choose to confine our attention here to the unidimensional setting.)

Let us assume next that each voter has an ideal position in the political spectrum, and evaluates every other policy in [0, 1] by looking at the distance between that policy and her ideal point; that is, an individual with ideal point x in [0, 1] likes the point y better than z iff |x − y| < |x − z|. For instance, the voter with the ideal point 1/2 ∈ [0, 1] likes the point 1/4 better than 1. Such preferences are called single-peaked in the literature because, for any x ∈ [0, 1], the mapping y ↦ −|x − y| is strictly increasing on [0, x] and strictly decreasing on [x, 1]. (Plot the graph of this mapping on [0, 1] and see for yourself.)

We model the society as a continuum, and posit that the voters (who can be identified with their ideal positions in this model) are distributed uniformly over [0, 1]. Thus 1/2 corresponds to the median ideal position in the society; that is, the ideal positions of exactly half of the society lie to the left of 1/2. (Once again this is not the most realistic of assumptions, but it is by far the most standard in the literature.)

The players in a voting game are the political candidates or parties. We consider the case in which there are n ∈ {2, 3} candidates whose problem is to decide upon which policy to propose (or, equivalently, which position to take in the political spectrum). We assume that the only goal of each candidate is to win the election. Of course, it would certainly be reasonable to posit that the candidates have policy preferences of their own, and hence care also about the policy that will be implemented in equilibrium. However, vote maximization is certainly one of the major goals of politicians, and the model at hand (which is sometimes called the Downsian model of political competition) is useful in identifying the implications of such an objective for the pre-election behavior of the candidates.

Each citizen votes for the candidate who has chosen the position closest to her ideal point (because she has single-peaked preferences), and all this is known by the candidates. Finally, to complete the specification of the model, we must append to this setting a tie-breaking rule. We postulate in this regard that candidates share equally the votes that they attract together. Each candidate prefers winning the election to a tie for the first place, and a tie for the first place to losing the election.

Losing the election may or may not be the worst outcome for a candidate, depending on whether or not running in the election is costly. We take up each of these possibilities in turn.

Elections when running is not costly

Let us first consider the case where there are two political candidates. In this case the game at hand is a 2-person game with action spaces A1 = A2 = [0, 1]. The utility function of candidate i is given as

ui(ℓ1, ℓ2) = 1, if i wins the election alone at the position profile (ℓ1, ℓ2),
ui(ℓ1, ℓ2) = 1/2, if there is a tie at the position profile (ℓ1, ℓ2),
ui(ℓ1, ℓ2) = 0, if i loses the election at the position profile (ℓ1, ℓ2),

where ℓj denotes the policy chosen by candidate j = 1, 2.

We wish to find the set of Nash equilibria of this voting game. Rather than computing the best response correspondences of the players, we launch a direct attack. (You are welcome to verify the validity of the subsequent analysis by using the best response correspondences of the candidates.) Suppose that (ℓ1, ℓ2) is a Nash equilibrium, and consider first the possibility that candidate 1 wins the election at this position profile. Notice that if ℓ1 ≠ 1/2, then candidate 2 can force a win by choosing 1/2. This is because, in this case, she gets all the votes either in [0, (ℓ1 + 1/2)/2] (if ℓ1 > 1/2) or in [(ℓ1 + 1/2)/2, 1] (if ℓ1 < 1/2), which add up to more than half of the total number of votes, contradicting that candidate 1 is the winner in the equilibrium outcome (ℓ1, ℓ2). Thus we must have ℓ1 = 1/2. But this will not do either, because the best response of candidate 2 against ℓ1 = 1/2 is to play 1/2, which forces a tie, contradicting again the hypothesis that candidate 1 was the winner of the election in equilibrium. So candidate 1 cannot win the election alone in equilibrium, and by symmetry, neither can candidate 2. That is, the election is bound to end up in a tie in equilibrium, and we thus learn that ℓ1 = ℓ2 must be the case. But it is easily checked that we cannot have ℓ1 = ℓ2 ≠ 1/2 in equilibrium (either party would then deviate to, say, 1/2 and thus force a win). The only possible equilibrium outcome in this game is thus ℓ1 = ℓ2 = 1/2, and this is indeed an equilibrium, as you can easily verify. The conclusion is that in the unique equilibrium of the game both parties choose the median position. (If you are careful, you will notice that the assumption of uniformly distributed individuals did not really play a role in arriving at this conclusion. If the distribution is given by an arbitrary continuous density function f on [0, 1] with f(x) > 0, the equilibrium would have both parties locate on the median of this distribution.)

Life gets more complicated if a third candidate decides to join the race.
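Before turning to the three-candidate case, the two-candidate conclusion above can be checked by brute force. This numerical sketch is not part of the original notes; positions are restricted to an illustrative grid of multiples of 1/10.

```python
# Two-candidate Downsian game with uniform voters on [0, 1]: vote shares from
# the midpoint rule, and a grid search for Nash equilibria.
from fractions import Fraction

def shares(l1, l2):
    # voters go to the closer candidate; identical positions split everything
    if l1 == l2:
        return Fraction(1, 2), Fraction(1, 2)
    lo, hi = min(l1, l2), max(l1, l2)
    cut = (lo + hi) / 2
    s_lo, s_hi = cut, 1 - cut
    return (s_lo, s_hi) if l1 < l2 else (s_hi, s_lo)

def payoff(me, other):
    s_me, s_other = shares(me, other)
    if s_me > s_other:
        return 1
    return Fraction(1, 2) if s_me == s_other else 0

grid = [Fraction(k, 10) for k in range(11)]
equilibria = [
    (l1, l2)
    for l1 in grid for l2 in grid
    if all(payoff(l1, l2) >= payoff(d, l2) for d in grid)
    and all(payoff(l2, l1) >= payoff(d, l1) for d in grid)
]
```

The grid search returns the median profile (1/2, 1/2) as the unique equilibrium, in line with the analysis above.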

In the 3-person game that obtains in this case, the action spaces are A1 = A2 = A3 = [0, 1] and the utility function for candidate i is given as

ui(ℓ1, ℓ2, ℓ3) = 1, if i wins the election alone at the position profile (ℓ1, ℓ2, ℓ3),
ui(ℓ1, ℓ2, ℓ3) = 1/2, if there is a tie at the position profile (ℓ1, ℓ2, ℓ3),
ui(ℓ1, ℓ2, ℓ3) = 0, if i loses the election at the position profile (ℓ1, ℓ2, ℓ3),

where ℓj denotes the policy chosen by candidate j = 1, 2, 3.

The equilibrium of this game is not a trivial extension of the previous game. For instance, (ℓ1, ℓ2, ℓ3) = (1/2, 1/2, 1/2) does not correspond to a Nash equilibrium here. For, each candidate is getting approximately 33% of the total votes at this profile, and by moving slightly to the left (or right) of 1/2 any of the candidates can increase her share of the votes almost to 50%, and thus force a win.

It turns out that there are many Nash equilibria of this 3-person voting game. As an example, let us verify that (ℓ1, ℓ2, ℓ3) = (1/4, 1/4, 3/4) is an equilibrium. Begin with observing that candidate 3 wins the election alone at this position profile; this candidate obviously does not have any incentive to deviate from 3/4 given that the other two candidates position themselves at 1/4. Does candidate 1 (hence candidate 2) have a profitable deviation? No. Indeed, it is readily observed that if candidate 1 chooses instead of 1/4 any position in the interval [0, 1/4], then candidate 3 remains the winner, and if she deviates to any position in the interval [3/4, 1], then candidate 2 becomes the winner alone; in either case candidate 1 maintains a payoff level of 0. Less clear is the implication of choosing a policy in the interval (1/4, 3/4). The key observation here is that by doing so candidate 1 would attract the votes that belong to [(ℓ1 + 1/4)/2, (ℓ1 + 3/4)/2]; thus in this case candidate 1 would get exactly 25% of the total vote (see Figure 5). But either candidate 2 (if 3/4 > ℓ1 ≥ 1/2) or candidate 3 (if 1/2 ≥ ℓ1 > 1/4) is bound to collect at least 37.5% of the votes in this case. Therefore, choosing 1/4 is as good as choosing any other position in [0, 1] for candidate 1 given the actions of the others: neither candidate 1 nor candidate 2 can force a win by means of a unilateral deviation, and we conclude that this outcome is a Nash equilibrium. (Challenge: Compute all the Nash equilibria of this game.)

So we had better ask whether the equilibrium (1/4, 1/4, 3/4) is actually strong or not. In voting problems the issue of coalitions arises very naturally, and alas, this equilibrium is not strong. Indeed, candidates 1 and 2 can jointly deviate at this profile to, say, 3/4 − ε for small ε > 0, and thus force a joint win (which yields a payoff of 1/2 to each). What is more, there is no strong Nash equilibrium of this game. (Challenge: Prove this.)
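The verification of (1/4, 1/4, 3/4) can also be done numerically. This sketch is not part of the original notes; deviations are scanned over an illustrative grid of multiples of 1/20.

```python
# Three-candidate spatial voting with uniform voters: vote shares and a check
# that no unilateral grid deviation helps candidate 1 at (1/4, 1/4, 3/4).
from fractions import Fraction

def vote_shares(positions):
    distinct = sorted(set(positions))
    m = len(distinct)
    block = {}
    for i, x in enumerate(distinct):
        left = Fraction(0) if i == 0 else (distinct[i - 1] + x) / 2
        right = Fraction(1) if i == m - 1 else (x + distinct[i + 1]) / 2
        block[x] = right - left          # mass of voters nearest to x
    counts = {x: positions.count(x) for x in distinct}
    return [block[p] / counts[p] for p in positions]  # co-located split evenly

def payoff(i, positions):
    s = vote_shares(positions)
    top = max(s)
    if s[i] < top:
        return Fraction(0)
    return Fraction(1) if s.count(top) == 1 else Fraction(1, 2)

profile = [Fraction(1, 4), Fraction(1, 4), Fraction(3, 4)]
s_profile = vote_shares(profile)         # candidate 3 takes half the votes
grid = [Fraction(k, 20) for k in range(21)]
dev_payoffs = [payoff(0, [d, profile[1], profile[2]]) for d in grid]
```

Candidate 3 wins alone with 50% of the votes, and every deviation available to candidate 1 on the grid still yields a payoff of 0, as argued in the text.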

Elections when running is costly

In this case, staying out is a meaningful alternative for each political candidate. We model the situation as a game in strategic form by setting, for each candidate i,

Ai = [0, 1] ∪ {stay out}

and

ui(ℓ) = 1, if i wins the election alone at the position profile ℓ,
ui(ℓ) = 1/2, if i ties for the first place at the position profile ℓ,
ui(ℓ) = 0, if i stays out at the position profile ℓ,
ui(ℓ) = −1, if i runs but loses the election at the position profile ℓ,

where ℓ ∈ A1 × ... × An with n ∈ {2, 3}.

Exercise. Prove: If n = 2, the unique Nash equilibrium of the game defined above is (1/2, 1/2). (The equilibrium analysis of this game is essentially identical to that of the first game considered above when n = 2, so we leave the related analysis as an exercise.)

Once again, life is more complicated in the 3-person scenario, but now this is not because of the multiplicity of equilibria. On the contrary, this game has no Nash equilibrium when n = 3. A sketch of the proof can be given as follows. First observe that, since each candidate can avoid losing by staying out of the election, all running candidates must tie for the first place in any equilibrium. Moreover, it cannot be that everyone stays out in equilibrium, for otherwise any one of the candidates can profitably deviate and force a win (why?). Similarly, there cannot be only one running candidate in equilibrium, since any other player may choose the same location as the running candidate and force a tie for the first place (which is better than staying out). Therefore, in any given equilibrium, there must exist two or more running candidates who tie for the first place.

Consider first the possibility that there are exactly two such candidates. Then, by the exercise above, both candidates must be choosing 1/2. But since the running candidates share the total votes, the remaining candidate can force a win by choosing slightly to the left (or right) of 1/2, contradicting that we are at an equilibrium. We are then left with the possibility that all three candidates choose not to stay out and tie for the first place. Suppose that (ℓ1, ℓ2, ℓ3) is such an equilibrium. If ℓ1 = ℓ2 = ℓ3, any one of the candidates can profitably deviate and force a win (why?), so at least two components of (ℓ1, ℓ2, ℓ3) must be distinct. Suppose then that ℓ1 ≠ ℓ2 = ℓ3. In this case, candidate 1 can force a win by getting very close to ℓ2 (see this?), and hence she cannot be best responding in the profile (ℓ1, ℓ2, ℓ3). The two other possibilities in which exactly two components of (ℓ1, ℓ2, ℓ3) are distinct are ruled out similarly. We are then left with the following final possibility: ℓ1 ≠ ℓ2 ≠ ℓ3 ≠ ℓ1. To rule out this case as well, we pick the leftmost candidate, call her i (so we have ℓi = min{ℓ1, ℓ2, ℓ3}), and observe that this candidate can force a win by choosing a position very close to the median of {ℓ1, ℓ2, ℓ3}; hence she cannot be best responding either. Consequently, we conclude that there does not exist a Nash equilibrium for the 3-person voting game at hand.

The following table summarizes our findings in the four spatial voting games we have

examined above.

  Number of      Running is not costly             Running is costly
  candidates
      2          the only equilibrium is           the only equilibrium is
                 the median position               the median position
      3          there are many Nash equilibria    a Nash equilibrium
                 but a strong Nash equilibrium     does not exist
                 does not exist

It is illuminating to observe how seemingly minor alterations in these voting models result in such vast changes in the set of equilibria.

Mixed Strategy Equilibrium
Levent Koçkesen

1 Introduction

Up to now we have assumed that the only choice available to players was to pick an action from the set of available actions. In some situations a player may want to randomize between several actions. If a player chooses which action to play randomly, we say that the player is using a mixed strategy, as opposed to a pure strategy. In a pure strategy the player chooses an action for sure, whereas in a mixed strategy, she chooses a probability distribution over the set of actions available to her.

As a simple illustration, consider the following matching-pennies game:

        H        T
H     1, −1   −1, 1
T    −1, 1     1, −1

If we restrict players' strategies only to actions, as we have done so far, this game has no Nash equilibrium (check); i.e., it has no Nash equilibrium in pure strategies. Since we have argued that Nash equilibrium is a necessary condition for a steady state, does that mean that the matching-pennies game has no steady state if players repeatedly play this game and forecast each other's action on the basis of past play? To answer this question let us allow players to use mixed strategies, and let each player play H and T with half probability each. We claim that this choice of strategies constitutes a steady state. Since player 2 plays H with probability 1/2, the expected payoff of player 1 if she plays H is (1/2)(1) + (1/2)(−1) = 0. Similarly, the expected payoff to action T is 0. Therefore, player 1 has no reason to deviate from playing H and T with probability 1/2 each. In other words, if player 1 predicts that player 2 will play H and T with half probability each, then she has no reason not to play in the specified manner herself, and the same goes for player 2. This shows that the strategy profile where players 1 and 2 play H and T with half probability each is a steady state of this situation. We say that playing H and T with probabilities 1/2 and 1/2, respectively, constitutes a mixed strategy equilibrium of this game: if each player predicts that the other player will play in this manner, then each player actually has an incentive to adopt a mixed strategy with these probabilities. In this section we will analyze the implications of allowing players to use mixed strategies.
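The indifference computation above can be reproduced directly. This short sketch is not part of the original handout.

```python
# Expected payoffs in matching pennies under mixed strategies.
u1 = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def expected_u1(p, q):
    # p = prob. player 1 plays H, q = prob. player 2 plays H
    probs = {("H", "H"): p * q, ("H", "T"): p * (1 - q),
             ("T", "H"): (1 - p) * q, ("T", "T"): (1 - p) * (1 - q)}
    return sum(probs[a] * u1[a] for a in u1)

payoff_H = expected_u1(1.0, 0.5)  # play H for sure against the 1/2-1/2 mix
payoff_T = expected_u1(0.0, 0.5)  # play T for sure against the 1/2-1/2 mix
```

Both pure actions earn 0 against the half-half mix, so player 1 is indeed indifferent, which is exactly why the half-half profile is a steady state.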

If, for example, player 1 plays H constantly, rather than the above mixed strategy, then it is reasonable that player 2 will come to expect him to play H again and play her best response, which is T. This will result in player 1 getting −1 as long as he continues playing H. Therefore, he should try to be unpredictable, for as soon as his opponent becomes able to predict his action, she will be able to take advantage of the situation; that is, player 1 should try to mimic playing a mixed strategy by playing H and T with frequencies 1/2 and 1/2.

Consider the Hawk-Dove game for another motivation:

        H       D
H     0, 0    6, 1
D     1, 6    3, 3

Suppose each period two randomly selected individuals, who both belong to a large population, play this game. Also suppose that 3/4 of the population plays H (is hawkish) and 1/4 plays D (is dovish). Since the opponent is chosen randomly from a large population, each player expects the opponent to play H with probability 3/4 and D with probability 1/4. We claim that this is a stable population composition. Would a dovish player do better if she were a hawkish player? Well, on average a dovish player gets a payoff of (3/4)(1) + (1/4)(3) = 3/2. A hawkish player gets (3/4)(0) + (1/4)(6) = 3/2 as well. Therefore, neither type of player has a reason to change his behavior.

Definition 1. A mixed strategy αi for player i is a probability distribution over his set of available actions, Ai. In other words, if player i has m actions available, a mixed strategy is an m-dimensional vector (α¹i, α²i, ..., αᵐi) such that αᵏi ≥ 0 for all k = 1, ..., m, and α¹i + α²i + ... + αᵐi = 1.

Notice that not all actions have to receive a positive probability in a mixed strategy. Therefore, it is also possible to see pure strategies as degenerate mixed strategies in which all but one action is played with zero probability.

2 Mixed Strategies and Expected Payoffs

We will denote by αi(ai) the probability assigned to action ai by the mixed strategy αi. Let Δ(X) denote the set of all probability distributions on a set X. Then any mixed strategy αi for player i is an element of Δ(Ai), i.e., αi ∈ Δ(Ai). Following the convention we developed for action profiles, we will denote by α = (αi)i∈N a mixed strategy profile, i.e., a mixed strategy for each player in the game. To denote the strategy profile in which player i plays α′i and the rest of the players play α*, we will use (α′i, α*−i). Unless otherwise stated, we will assume that players choose their mixed strategies independently.

Let us illustrate these concepts by using the Battle of the Sexes game that we introduced before:

        m       o
m     2, 1    0, 0
o     0, 0    1, 2

A possible mixed strategy for player 1 is (1/2, 1/2), or α1(m) = α1(o) = 1/2. Another is (1/3, 2/3), or α1(m) = 1/3, α1(o) = 2/3. For player 2, as a possible mixed strategy, we may have (2/3, 1/3), i.e., α2(m) = 2/3, α2(o) = 1/3. A mixed strategy profile could then be ((1/2, 1/2), (2/3, 1/3)); another could be ((1/3, 2/3), (2/3, 1/3)). Notice that we always have α1(o) = 1 − α1(m) and α2(o) = 1 − α2(m), simply because probabilities have to add up to one. Therefore, sometimes we may want to simplify the notation by defining, say, p ≡ α1(m) and q ≡ α2(m), and using (p, q) to denote a strategy profile, where player 1 chooses m with probability p and action o with probability 1 − p, and player 2 chooses m with probability q and action o with probability 1 − q. Notice, however, that if there were 3 actions for a player, then we would need at least two numbers to specify a mixed strategy for that player.

Once we allow players to use mixed strategies, the outcomes are not deterministic anymore. Therefore, we have to specify players' preferences over lotteries, i.e., over probability distributions over outcomes, rather than preferences over certain outcomes. We will assume that players' preferences satisfy the assumptions of von Neumann and Morgenstern, so that the payoff to an uncertain outcome is the weighted average of the payoffs to the underlying certain outcomes, the weight attached to each outcome being the probability with which that outcome occurs. In other words, we assume that for each player i there is a payoff function ui defined over the certain outcomes a ∈ A such that the player's preferences over lotteries on A can be represented by the expected value of ui. (See Dutta, ch. 27, for more on this.) If each outcome a ∈ A occurs with probability p(a), then the expected payoff of player i is

ui(p) ≡ Σ_{a∈A} p(a) ui(a).

Example 2. For example, if both players play m with probability 1/2 in the BoS game, then each action profile is obtained with probability 1/4. More generally, in the BoS game, if each player i plays the mixed strategy αi, then the expected payoff of player i is given by

ui(α1, α2) = α1(m) α2(m) ui(m, m) + α1(m) α2(o) ui(m, o) + α1(o) α2(m) ui(o, m) + α1(o) α2(o) ui(o, o)
           = α1(m) [α2(m) ui(m, m) + α2(o) ui(m, o)] + α1(o) [α2(m) ui(o, m) + α2(o) ui(o, o)]
           = α1(m) ui(m, α2) + α1(o) ui(o, α2).

In particular, for player 1,

u1(α1, α2) = α1(m) α2(m) (2) + α1(m) α2(o) (0) + α1(o) α2(m) (0) + α1(o) α2(o) (1) = 2 α1(m) α2(m) + α1(o) α2(o).

Similarly, the expected payoff of player 2 is

u2(α1, α2) = α1(m) α2(m) + 2 α1(o) α2(o).

Since αi(o) = 1 − αi(m), we can write these expected payoffs as

u1(α1, α2) = 1 − α2(m) + α1(m) [3 α2(m) − 1]

and

u2(α1, α2) = 2 − 2 α1(m) + α2(m) [3 α1(m) − 2].

For example, if player 1 plays m for sure, i.e., α1(m) = 1, and player 2 plays m with probability 1/3, then

u1(α1, α2) = 1 − 1/3 + 1 [3 × (1/3) − 1] = 2/3

and

u2(α1, α2) = 2 − 2(1) + (1/3) [3(1) − 2] = 1/3.

Definition 3. The support of a mixed strategy αi is the set of actions to which αi assigns a positive probability, i.e.,

supp(αi) = {ai ∈ Ai : αi(ai) > 0}.

In the above example we have supp(α1) = {m} and supp(α2) = {m, o}.

3 Mixed Strategy Equilibrium

Definition 4. The best response correspondence of player i is the set of mixed strategies which are optimal given the other players' mixed strategies. In other words:

Bi(α−i) = argmax_{αi ∈ Δ(Ai)} ui(αi, α−i).
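The closed-form expected payoffs derived above are easy to check against the direct expectation. This sketch is not part of the original handout.

```python
# Closed-form BoS expected payoffs, with p = alpha1(m) and q = alpha2(m).
def u1(p, q):
    return 1 - q + p * (3 * q - 1)

def u2(p, q):
    return 2 - 2 * p + q * (3 * p - 2)

def u1_direct(p, q):
    # direct expectation over the four action profiles of BoS
    return 2 * p * q + (1 - p) * (1 - q)
```

The two expressions for player 1's payoff agree everywhere, and the example point (p, q) = (1, 1/3) gives the payoffs 2/3 and 1/3 computed in the text.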

Example 5. Suppose α2(m) = 1/2. Then

u1(α1, (1/2, 1/2)) = 1 − 1/2 + α1(m) [3(1/2) − 1] = 1/2 + (1/2) α1(m),

and therefore B1((1/2, 1/2)) = {(1, 0)}. In general, letting p ≡ α1(m) and q ≡ α2(m), we can express the best response of player 1 in terms of the optimal choice of p in response to q:

B1(q) = {1}, if q > 1/3; [0, 1], if q = 1/3; {0}, if q < 1/3.

The best response correspondence of player 2, i.e., the optimal choices of q in response to p, is

B2(p) = {1}, if p > 2/3; [0, 1], if p = 2/3; {0}, if p < 2/3.

[See Figure 1.]

Definition 6. A mixed strategy equilibrium is a mixed strategy profile (α1*, ..., αn*) such that, for all i = 1, ..., n,

αi* ∈ argmax_{αi ∈ Δ(Ai)} ui(αi, α*−i), or αi* ∈ Bi(α*−i).

In the Battle of the Sexes game, the set of mixed strategy Nash equilibria is

{((1, 0), (1, 0)), ((0, 1), (0, 1)), ((2/3, 1/3), (1/3, 2/3))}.

Alternatively, in terms of (α1(m), α2(m)), we may say that the set of mixed strategy equilibria is {(1, 1), (0, 0), (2/3, 1/3)}.

Remark 1. A mixed strategy αi is a best response to α−i if and only if every action in the support of αi is itself a best response to α−i. Otherwise, player i could transfer probability from an action which is not a best response to an action which is a best response and strictly increase his payoff.

[Figure 1: Best response correspondences in the BoS game, with p on the horizontal axis and q on the vertical axis; B1(q) jumps at q = 1/3 and B2(p) jumps at p = 2/3.]

Remark 2. This suggests an easy way to find mixed strategy Nash equilibria. A mixed strategy profile α* is a mixed strategy Nash equilibrium if and only if, for each player i, each action in the support of αi* yields the same expected payoff when played against α*−i, and no other action yields a strictly higher payoff.

Example 7. In the BoS game, if (α1*, α2*) is a mixed strategy equilibrium with supp(α1*) = supp(α2*) = {m, o}, then it must be that the expected payoffs to m and o are the same for both players against α*−i. In other words, for player 1

2 α2*(m) = 1 − α2*(m)

and for player 2

α1*(m) = 2 − 2 α1*(m),

which imply that α2*(m) = 1/3 and α1*(m) = 2/3.

Proposition 8. Every finite strategic form game has a mixed strategy equilibrium.
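The indifference conditions of Example 7 can be checked in two lines. This sketch is not part of the original handout.

```python
# Indifference conditions for the totally mixed BoS equilibrium.
q_star = 1 / 3   # solves 2q = 1 - q  (player 1 indifferent between m and o)
p_star = 2 / 3   # solves p = 2(1 - p)  (player 2 indifferent between m and o)

def payoff_gap_1(q):
    return 2 * q - (1 - q)       # u1(m, q) - u1(o, q)

def payoff_gap_2(p):
    return p - 2 * (1 - p)       # u2(p, m) - u2(p, o)
```

Both gaps vanish exactly at (p*, q*) = (2/3, 1/3), and the sign of each gap away from the equilibrium reproduces the best response correspondences B1 and B2 above.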

4 Dominated Actions and Mixed Strategies

In earlier lectures we defined an action to be weakly or strictly dominated only if there existed another action which weakly or strictly dominated that action. However, it is possible that an action is not dominated by any other action, yet is dominated by a mixed strategy.

Definition 9. In a strategic form game, player i's mixed strategy αi strictly dominates her action a′i if

ui(αi, a−i) > ui(a′i, a−i) for all a−i ∈ A−i.

Example 10. Consider the following game (see Example 118.2 in Osborne, chapter 4, on my web page), in which only player 1's payoffs are shown, as player 2's payoffs play no role in the argument:

        L      R
T       1      1
M       4      0
B       0      3

Clearly, no action dominates T, but the mixed strategy α1(M) = 1/2, α1(B) = 1/2 strictly dominates T: it yields 2 against L and 3/2 against R, both greater than 1.

Remark 4. A strictly dominated action is never used with positive probability in a mixed strategy equilibrium. To find the mixed strategy equilibria in games where one of the players has more than two actions, one should therefore first look for strictly dominated actions and eliminate them.
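The domination check of Example 10 can be automated. This sketch is not part of the original handout and uses the payoffs as reconstructed above (player 1 only).

```python
# Check domination by pure actions versus by a mixed strategy (player 1's
# payoffs in the game of Example 10, as reconstructed above).
u = {"T": {"L": 1, "R": 1}, "M": {"L": 4, "R": 0}, "B": {"L": 0, "R": 3}}

def dominates(mix, action):
    # mix: dict mapping actions to probabilities; strict domination requires
    # a strictly higher expected payoff against every column
    return all(
        sum(pr * u[a][col] for a, pr in mix.items()) > u[action][col]
        for col in ("L", "R")
    )

pure_dominates_T = any(dominates({a: 1.0}, "T") for a in ("M", "B"))
mix_dominates_T = dominates({"M": 0.5, "B": 0.5}, "T")
```

Neither M nor B alone dominates T, but the half-half mixture of M and B does, which is exactly the point of the example.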

Bayesian Games
Levent Koçkesen

So far we have assumed that all players had perfect information regarding the elements of a game. Such games are called games with complete information. A game with incomplete information, on the other hand, tries to model situations in which some players have private information before the game begins. The initial private information is called the type of the player. For example, types could be the privately observed costs in an oligopoly game, or the privately known valuations of an object in an auction, etc.

1 Preliminaries

A Bayesian game is a strategic form game with incomplete information. It consists of:

- a set of players, N = {1, ..., n},

and for each i ∈ N

- an action set, Ai (A = ×_{i∈N} Ai),
- a type set, Θi (Θ = ×_{i∈N} Θi),
- a probability function, pi : Θi → Δ(Θ−i),
- a payoff function, ui : A × Θ → R.

The function pi summarizes what player i believes about the types of the other players given her type; pi(θ−i | θi) is the conditional probability assigned to the type profile θ−i ∈ Θ−i. Similarly, ui(a | θ) is the payoff of player i when the action profile is a and the type profile is θ. We call a Bayesian game finite if N, Ai, and Θi are all finite, for all i ∈ N.

A pure strategy for player i in a Bayesian game is a function which maps player i's types into her action set,

ai : Θi → Ai,

so that ai(θi) is the action choice of type θi of player i. A mixed strategy for player i is a function

αi : Θi → Δ(Ai),

so that αi(ai | θi) is the probability assigned by αi to action ai for type θi of player i.

Suppose there are two players, 1 and 2, and for each player there are two possible types: player i's possible types are θi and θ′i. Suppose that the types are independently distributed, the probability of θ1 is p, and the probability of θ2 is q. For a given pure strategy profile a*, the expected payoff of player 1 of type θ1 is

q u1(a1*(θ1), a2*(θ2) | θ1, θ2) + (1 − q) u1(a1*(θ1), a2*(θ′2) | θ1, θ′2).

Similarly, for a given mixed strategy profile α*, the expected payoff of player 1 of type θ1 is

q Σ_{a∈A} α1*(a1 | θ1) α2*(a2 | θ2) u1(a1, a2 | θ1, θ2) + (1 − q) Σ_{a∈A} α1*(a1 | θ1) α2*(a2 | θ′2) u1(a1, a2 | θ1, θ′2).

2 Bayesian Equilibrium

Definition 1. A Bayesian equilibrium of a Bayesian game is a mixed strategy profile α = (αi)i∈N such that for every player i ∈ N and every type θi ∈ Θi, we have

αi(· | θi) ∈ argmax_{γ ∈ Δ(Ai)} Σ_{θ−i ∈ Θ−i} pi(θ−i | θi) Σ_{a∈A} (Π_{j∈N\{i}} αj(aj | θj)) γ(ai) ui(a | θ).

Remark 1. Type, in general, can be any private information that is relevant to the player's decision making, such as the payoff function, the player's beliefs about other players' payoff functions, her beliefs about what other players believe her beliefs are, and so on.

Remark 2. Notice that, in the definition of a Bayesian equilibrium, we need to specify strategies for each type of a player, even if in the actual game that is played all but one of these types are non-existent. This is because, given a player's incomplete information, the analysis of that player's decision problem requires us to consider what each type of the other players would do, if they were to play the game.

3 Some Examples

3.1 Battle of the Sexes with incomplete information

Suppose player 1 has perfect information and player 2 has two types, l and h. Type l loves going out with player 1 whereas type h hates it. Player 1 has only one type, and does not know which type player 2 is; her beliefs place probability 1/2 on each type. The following tables give the payoffs for each action and type profile:

  type l:    B       S          type h:    B       S
    B      2, 1    0, 0           B      2, 0    0, 2
    S      0, 0    1, 2           S      0, 1    1, 0

We can represent this situation as a Bayesian game:

• N = {1, 2}
• A_1 = A_2 = {B, S}
• Θ_1 = {x}, Θ_2 = {l, h}
• p_1(l | x) = p_1(h | x) = 1/2, p_2(x | l) = p_2(x | h) = 1
• u_1, u_2 are given in the tables above.

Since player 1 has only one type (i.e., his type is common knowledge) we will omit references to his type from now on. Let us find the Bayesian equilibria of this game by analyzing the decision problem of each player of each type.

Player 2 of type l: Given player 1's strategy α_1, his expected payoff to

• action B is α_1(B),
• action S is 2(1 − α_1(B)),

so that his best response is to play B if α_1(B) > 2/3 and to play S if α_1(B) < 2/3.

Player 2 of type h: Given player 1's strategy α_1, his expected payoff to

• action B is 1 − α_1(B),
• action S is 2α_1(B),

so that his best response is to play B if α_1(B) < 1/3 and to play S if α_1(B) > 1/3.

Player 1: Given player 2's strategy α_2(· | l) and α_2(· | h), her expected payoff to

• action B is (1/2)α_2(B | l)(2) + (1/2)α_2(B | h)(2) = α_2(B | l) + α_2(B | h),
• action S is (1/2)(1 − α_2(B | l))(1) + (1/2)(1 − α_2(B | h))(1) = 1 − (α_2(B | l) + α_2(B | h))/2,

so that her best response is to play B if α_2(B | l) + α_2(B | h) > 2/3 and to play S if α_2(B | l) + α_2(B | h) < 2/3.
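These indifference thresholds are easy to verify numerically. The following sketch is an added illustration (not part of the original notes): it transcribes the payoff tables above and checks each threshold with exact rational arithmetic.

```python
from fractions import Fraction

# Payoff tables from the text: u[type][(a1, a2)] = (u1, u2)
u = {
    "l": {("B", "B"): (2, 1), ("B", "S"): (0, 0), ("S", "B"): (0, 0), ("S", "S"): (1, 2)},
    "h": {("B", "B"): (2, 0), ("B", "S"): (0, 2), ("S", "B"): (0, 1), ("S", "S"): (1, 0)},
}

def payoff2(t, a2, p1B):
    """Expected payoff of player 2 of type t to action a2 when player 1 plays B with prob p1B."""
    return p1B * u[t][("B", a2)][1] + (1 - p1B) * u[t][("S", a2)][1]

def payoff1(a1, p2B_l, p2B_h):
    """Expected payoff of player 1 to action a1 when the two types of player 2 are equally likely."""
    total = Fraction(0)
    for t, p in (("l", p2B_l), ("h", p2B_h)):
        total += Fraction(1, 2) * (p * u[t][(a1, "B")][0] + (1 - p) * u[t][(a1, "S")][0])
    return total

# Type l is indifferent exactly at alpha1(B) = 2/3; type h exactly at 1/3
assert payoff2("l", "B", Fraction(2, 3)) == payoff2("l", "S", Fraction(2, 3))
assert payoff2("h", "B", Fraction(1, 3)) == payoff2("h", "S", Fraction(1, 3))
# Player 1 is indifferent whenever alpha2(B|l) + alpha2(B|h) = 2/3
assert payoff1("B", Fraction(2, 3), Fraction(0)) == payoff1("S", Fraction(2, 3), Fraction(0))
print("indifference thresholds confirmed")
```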

Let us first check if there is a pure strategy equilibrium in which both types of player 2 play B, i.e., α_2(B | l) = α_2(B | h) = 1. In this case player 1's best response is to play B as well, to which playing B is not a best response for player 2 of type h. Similarly, check that α_2(B | l) = α_2(B | h) = 0, and α_2(B | l) = 0 together with α_2(B | h) = 1, cannot be part of a Bayesian equilibrium.

Let's check if α_2(B | l) = 1 and α_2(B | h) = 0 could be part of an equilibrium. In this case player 1's best response is to play B, player 2 of type l's best response is to play B, and that of type h is S. Therefore, (α_1(B), α_2(B | l), α_2(B | h)) = (1, 1, 0) is a Bayesian equilibrium.

Clearly, there is no equilibrium in which both types of player 2 mix. Suppose only type l mixes. Then α_1(B) = 2/3, which implies that α_2(B | l) + α_2(B | h) = 2/3. This, in turn, implies that α_2(B | h) = 0 (type h's best response to α_1(B) = 2/3), and hence α_2(B | l) = 2/3. Therefore, the following is another Bayesian equilibrium of this game: (α_1(B), α_2(B | l), α_2(B | h)) = (2/3, 2/3, 0). As an exercise, show that there is one more equilibrium given by (α_1(B), α_2(B | l), α_2(B | h)) = (1/3, 0, 2/3).

3.2 Cournot Duopoly with incomplete information

The profit functions are given by u_i = q_i(θ_i − q_i − q_j). Firm 1 has one type, θ_1 = 1, but firm 2 has private information about its type θ_2. Firm 1 believes that θ_2 = 3/4 with probability 1/2 and θ_2 = 5/4 with probability 1/2, and this belief is common knowledge.

We will look for a pure strategy equilibrium of this game. Firm 2 of type θ_2's decision problem is to

  max_{q_2} q_2(θ_2 − q_1 − q_2),

which is solved at

  q_2*(θ_2) = (θ_2 − q_1)/2.

Firm 1's decision problem, on the other hand, is

  max_{q_1} { (1/2) q_1(1 − q_1 − q_2*(3/4)) + (1/2) q_1(1 − q_1 − q_2*(5/4)) },

which is solved at

  q_1* = (2 − q_2*(3/4) − q_2*(5/4))/4.

Solving yields

  q_1* = 1/3,  q_2*(3/4) = 5/24,  q_2*(5/4) = 11/24.
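As a check on this algebra, the following sketch (an added illustration, not from the original notes) verifies with exact fractions that these quantities are mutual best responses against a fine grid of alternative outputs.

```python
from fractions import Fraction

theta_lo, theta_hi = Fraction(3, 4), Fraction(5, 4)

def profit2(theta2, q2, q1):
    # Firm 2 of type theta2 earns q2 * (theta2 - q1 - q2)
    return q2 * (theta2 - q1 - q2)

def profit1(q1, q2_lo, q2_hi):
    # Firm 1 averages over firm 2's two equally likely types
    return Fraction(1, 2) * q1 * (1 - q1 - q2_lo) + Fraction(1, 2) * q1 * (1 - q1 - q2_hi)

# Candidate equilibrium from the text
q1_star = Fraction(1, 3)
q2_lo_star = (theta_lo - q1_star) / 2
q2_hi_star = (theta_hi - q1_star) / 2
assert (q2_lo_star, q2_hi_star) == (Fraction(5, 24), Fraction(11, 24))

# No profitable deviation for either firm on a fine grid of quantities
grid = [Fraction(k, 240) for k in range(241)]
assert all(profit1(q, q2_lo_star, q2_hi_star) <= profit1(q1_star, q2_lo_star, q2_hi_star) for q in grid)
assert all(profit2(theta_lo, q, q1_star) <= profit2(theta_lo, q2_lo_star, q1_star) for q in grid)
assert all(profit2(theta_hi, q, q1_star) <= profit2(theta_hi, q2_hi_star, q1_star) for q in grid)
print("equilibrium verified")
```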

pp. and that bidder wins the object at the …nal price. In a common value auction. 4. oral. cars. which are also used by …rms to buy inputs or to subcontract work. “Auction Theory: A Guide to the Literature.Auctions Many economic transactions are conducted through auctions. For example. 227-286. English auction): the price is raised until only one bidder remains. the highest bidder wins but pays the second highest bid. foreign exchange. second-price sealed bid auction (also known as Vickrey auction2 ). 3. wars of attrition. mineral rights. descending-bid auction (also called Dutch auction): the auctioneer starts at a very high price and lowers it continuously until someone accepts the currently announced price. but bidders have di¤erent private information regarding that value.1 There are four commonly used and studied forms of auctions: 1. That bidder wins the object at that price. the …rst-price sealed bid auction is strategically equivalent to descending bid auction and the second-price sealed bid auction is strategically equivalent to ascending bid auction. Bidders submit their bids in a sealed envelope. In a private value auction each bidder’ valuation is known only by the bidder. For a good introductory survey to the auction theory see Paul Klemperer (1999).” Journal of Economic Surveys. Art work. and more recently airwave spectrum rights via auctions. ascending-bid auction (also called the open. July 1999. for example. publicly owned companies. and houses are also sold by auctions. not only because they are simpler to analyze but also because in the private values case. Governments sell treasury bills. 2. antiques. We will analyze sealed bid auctions. 13(3). or. 1 1 . Takeover battles are e¤ectively auctions as well and auction theory has been applied to areas as diverse as queues. Government contracts are awarded by procurement auctions. 2 Named after William Vickrey of Columbia University who was awarded the Nobel Prize in economics in 1996. 
and lobbying contests. in an s artwork or antique auction. and the object is sold to the highest bidder at her bid. as it could be the case. Auctions also di¤er with respect to the valuation of the bidders. …rst-price sealed bid auction: each bidder submits her bid in a sealed envelope without seeing others’bids. the value of an oil tract or a company maybe the same for everybody but di¤erent bidders may have di¤erent estimates of that value. the actual value of the object is the same for everyone.

1 Independent Private Values

Previously, we have looked at two forms of auctions, namely First Price and Second Price Auctions, in a complete information framework in which each bidder knew the valuations of every other bidder. In this section we relax the complete information assumption and revisit these two forms of auctions. In particular, we will assume that each bidder knows only her own valuation, and the valuations are independently distributed random variables whose distributions are common knowledge.

The following elements define the general form of an auction that we will analyze:

• a set of bidders, N = {1, ..., n}, and for each i ∈ N
• a type set (set of possible valuations), Θ_i = [v̲, v̄], with v̄ > v̲ ≥ 0,
• an action set, A_i = R_+ (actions are bids),
• a belief function: player i believes that her opponents' valuations are independent draws from a distribution function F that is strictly increasing and continuous on [v̲, v̄],
• a payoff function, which is defined for any a ∈ A, v ∈ Θ as follows:

  u_i(a, v) = (v_i − P(a))/m,  if a_j ≤ a_i for all j ≠ i and |{j : a_j = a_i}| = m,
  u_i(a, v) = 0,  if a_j > a_i for some j ≠ i,

where P(a) is the price paid by the winner if the bid profile is a. Notice that in the case of a tie the object is divided equally among all winners.

1.1 Second Price Auctions

In this design, the highest bidder wins and pays a price equal to the second highest bid. Although there are many Bayesian equilibria of second price auctions, bidding own valuation v_i is weakly dominant for each player i. To see this, let x be the highest of the other bids and consider bidding a'_i < v_i, v_i, or a''_i > v_i. Depending upon the value of x, the following table gives the payoffs to each of these actions by i:

              x ≤ a'_i          a'_i < x < v_i     x = v_i          v_i < x < a''_i    a''_i < x
  a'_i       win/tie, pay x     lose               lose             lose               lose
  v_i        win, pay x         win, pay x         tie              lose               lose
  a''_i      win, pay x         win, pay x         win, pay v_i     win, pay x         lose

By bidding smaller than v_i, you sometimes lose when you should win (a_i < x < v_i), and by bidding more than v_i, you sometimes win when you should lose (a_i > x > v_i). In every other case the outcome is the same as under truthful bidding, so bidding v_i is weakly dominant.
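The dominance argument is easy to stress-test by simulation. This sketch (an added illustration with arbitrary random draws; ties have probability zero with continuous values) confirms that against any realized profile of rival bids, bidding one's valuation does at least as well as any alternative bid.

```python
import random

random.seed(1)

def payoff(bid, value, others):
    """Second-price payoff against fixed rival bids: win iff bid is strictly
    highest, then pay the highest rival bid; losers pay nothing."""
    x = max(others)
    return value - x if bid > x else 0.0

for _ in range(5000):
    value = random.random()
    others = [random.random() for _ in range(3)]
    alt = 2 * random.random()           # an arbitrary alternative bid
    assert payoff(value, value, others) >= payoff(alt, value, others)
print("truthful bidding never beaten")
```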

1.2 First Price Auctions

In first price auctions, the highest bidder wins and pays her bid. Let us denote the bid of a player with type v_i by β_i(v_i) and look for symmetric equilibria, i.e., β_i(v) = β(v) for all i ∈ N. First, let's assume that the strategies β are strictly increasing and continuous on [v̲, v̄], and check that they are once we locate a possible equilibrium.

The expected payoff of a player with type v who bids b, when all the others are bidding according to β, is given by

  (v − b) prob(highest rival bid ≤ b) = (v − b) (prob(β(v) ≤ b))^{n−1} = (v − b) F(β^{−1}(b))^{n−1}

(because the v_i are independently distributed). The first order condition for maximizing the expected payoff is

  −F(β^{−1}(b))^{n−1} + (v − b)(n − 1) F(β^{−1}(b))^{n−2} F′(β^{−1}(b)) (β^{−1})′(b) = 0,

where (β^{−1})′(b) = 1/β′(β^{−1}(b)) by the inverse function theorem and by the fact that β is almost everywhere differentiable (since it is strictly increasing). For β(v) to be an equilibrium, the first order condition must hold when we substitute β(v) for b:

  −F(v)^{n−1} + (v − β(v))(n − 1) F(v)^{n−2} F′(v) (1/β′(v)) = 0,

or

  β′(v) F(v)^{n−1} + (n − 1) β(v) F′(v) F(v)^{n−2} = (n − 1) v F′(v) F(v)^{n−2},

which is a differential equation in β. Integrating both sides, we get

  β(v) F(v)^{n−1} = (n − 1) ∫_v̲^v x F(x)^{n−2} F′(x) dx = v F(v)^{n−1} − ∫_v̲^v F(x)^{n−1} dx.

Solving for β(v),

  β(v) = v − ( ∫_v̲^v F(x)^{n−1} dx ) / F(v)^{n−1}.

One can easily show that β is continuous and strictly increasing in v, as we hypothesized. Furthermore, notice that β(v̲) = v̲ but β(v) < v for v > v̲. That is, everybody bids less than her valuation, except the player with the lowest valuation. It is possible to show that these strategies indeed constitute a Bayesian equilibrium, although we will not attempt to do so here (see Fudenberg and Tirole, 1991). As an exercise, let's calculate β assuming F is uniform on [0, 1], i.e., F(x) = x.
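For F uniform on [0, 1] this formula gives β(v) = ((n−1)/n)v. As a numerical sanity check, the following sketch (an added illustration, assuming n = 4 bidders) verifies on a grid of alternative bids that bidding β(v) maximizes the expected payoff (v − b) F(β⁻¹(b))ⁿ⁻¹ against rivals who use β.

```python
n = 4  # an assumed number of bidders for this illustration

def expected_payoff(v, b):
    """(v - b) * P(all n-1 rival bids below b), rivals bidding
    beta(v_j) = (n-1)/n * v_j with v_j ~ U[0,1]."""
    if not 0 <= b <= (n - 1) / n:
        return float("-inf")    # bids above the rivals' maximum bid waste money
    return (v - b) * (n * b / (n - 1)) ** (n - 1)

for v in (0.1, 0.5, 0.9):
    beta_v = (n - 1) / n * v
    grid_best = max(expected_payoff(v, k / 1000) for k in range(1001))
    assert expected_payoff(v, beta_v) >= grid_best - 1e-12
print("beta(v) = (n-1)/n * v is optimal on the grid")
```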

In this case

  β(v) = v − ( ∫_0^v x^{n−1} dx ) / v^{n−1} = v − (v^n/n)/v^{n−1} = v − v/n = ((n−1)/n) v.

Uniform example solved explicitly: Let's look for a symmetric equilibrium of the form β(v) = av. The expected payoff of a player with type v who bids b, when all the others are bidding according to β, is given by

  (v − b) (prob(av ≤ b))^{n−1} = (v − b)(b/a)^{n−1}.

The first order condition for maximizing the expected payoff is

  (v − b)(n − 1) = b,

which is solved at

  b = ((n−1)/n) v.

1.3 All-Pay Auctions

Consider an auction in which the highest bidder wins the auction but every bidder pays his or her bid. This could model bribes, political contests, Olympic competition, wars of attrition, etc. Again, let's look for a symmetric equilibrium, β_i(v) = β(v) for all i ∈ N. The expected payoff of a player with type v who bids b, when all the others are bidding according to β, is given by

  v prob(b is the highest bid) − b = v F(β^{−1}(b))^{n−1} − b.

Let F be uniform over [0, 1]. Then this becomes

  v (β^{−1}(b))^{n−1} − b.

The first order condition for maximizing the expected payoff is

  −1 + v(n − 1)(β^{−1}(b))^{n−2} (β^{−1})′(b) = 0.

For β(v) to be an equilibrium, the first order condition must hold when we substitute β(v) for b:

  −1 + v(n − 1) v^{n−2} (1/β′(v)) = 0,

or

  β′(v) = (n − 1) v^{n−1},

which is a differential equation in β. Integrating both sides, we get

  β(v) = (n − 1) ∫_0^v x^{n−1} dx = ((n−1)/n) v^n.

Notice that as n increases, the equilibrium bid decreases.

2 Revenue Equivalence

In second price auctions each bidder bids her value and pays the second highest, so the expected revenue of the seller is the expected second highest value. In a first price auction, the highest bidder is the one with the highest value and bids a function of her value, which is ((n−1)/n) v_max in our example above. Therefore, the seller's expected revenue in a first price and a second price auction depends on the expectation of the highest and the second highest value, respectively. Given that there are n bidders, each with a value drawn independently from a common distribution, what are the expected values of the highest and second highest values? Order statistics provide the answer.

Order Statistics

Suppose that v is a real-valued random variable with distribution function F and density function f. Also suppose that n independent values are drawn from the same distribution to form a random sample (v_1, v_2, ..., v_n). Let v_(k) denote the kth smallest of (v_1, v_2, ..., v_n) and call it the kth order statistic. In particular, v_(n) is the highest and v_(n−1) is the second highest order statistic.

Let F_k denote the distribution function of v_(k). Let's start with the distribution function of v_(n):

  F_n(x) = prob(v_(n) ≤ x) = prob(all v_i ≤ x) = [F(x)]^n.

Similarly,

  F_{n−1}(x) = prob(v_(n−1) ≤ x) = prob(either n or (n−1) of the v's are ≤ x) = [F(x)]^n + n(1 − F(x))[F(x)]^{n−1}.

In general,

  F_k(x) = prob(v_(k) ≤ x) = prob((number of v's that are ≤ x) ≥ k) = Σ_{j=k}^n [ n!/(j!(n−j)!) ] F(x)^j (1 − F(x))^{n−j}.
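For F uniform on [0, 1] these distributions give E[v_(n)] = n/(n+1) and E[v_(n−1)] = (n−1)/(n+1). A quick Monte Carlo sketch (an added illustration, using n = 5 draws) agrees:

```python
import random

random.seed(0)
n, trials = 5, 200_000

sum_hi = sum_second = 0.0
for _ in range(trials):
    draws = sorted(random.random() for _ in range(n))
    sum_hi += draws[-1]        # v_(n), the highest order statistic
    sum_second += draws[-2]    # v_(n-1), the second highest

e_hi, e_second = sum_hi / trials, sum_second / trials
assert abs(e_hi - n / (n + 1)) < 0.01         # theory: 5/6
assert abs(e_second - (n - 1) / (n + 1)) < 0.01   # theory: 4/6
print(round(e_hi, 3), round(e_second, 3))
```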

Hence, if F is uniform on [0, 1], F(x) = x, we have

  F_n(x) = x^n,  f_n(x) = n x^{n−1},
  F_{n−1}(x) = x^n + n(1 − x) x^{n−1},  f_{n−1}(x) = (n − 1) n (1 − x) x^{n−2}.

Therefore,

  E[v_(n)] = ∫_0^1 x n x^{n−1} dx = n/(n+1),

and

  E[v_(n−1)] = ∫_0^1 x (n − 1) n (1 − x) x^{n−2} dx = (n−1)/(n+1).

Now, in a second price auction the expected revenue is the expected second highest value:

  E[R_2] = E[v_(n−1)] = (n−1)/(n+1),

and the revenue in the first price auction is the expected bid of the bidder with the highest value:

  E[R_1] = ((n−1)/n) E[v_(n)] = ((n−1)/n) (n/(n+1)) = (n−1)/(n+1).

Therefore, both auction forms generate the same expected revenues. This is an illustration of the revenue equivalence theorem:

Theorem Any auction with independent private values drawn from a common distribution in which

1. the number of bidders is the same and the bidders are risk-neutral,
2. the object always goes to the buyer with the highest value,
3. the bidder with the lowest value expects zero surplus,

yields the same expected revenue. Therefore, all four types of auctions yield the same expected revenue for the seller in the case of independent private values and risk neutrality. This theorem also allows us to calculate bidding strategies of other auctions. An all-pay auction, for example, satisfies the conditions of the theorem and hence must yield the same expected revenue.
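The equivalence can be simulated directly using the equilibrium strategies derived above: truthful bidding in the second-price auction, β(v) = ((n−1)/n)v in the first-price auction, and β(v) = ((n−1)/n)vⁿ in the all-pay auction. A sketch (an added illustration, with n = 4 assumed):

```python
import random

random.seed(7)
n, trials = 4, 200_000

rev_first = rev_second = rev_allpay = 0.0
for _ in range(trials):
    v = sorted(random.random() for _ in range(n))
    rev_first += (n - 1) / n * v[-1]                   # winner bids (n-1)/n * v
    rev_second += v[-2]                                # winner pays second highest value
    rev_allpay += sum((n - 1) / n * x**n for x in v)   # everyone pays (n-1)/n * v^n

avgs = [r / trials for r in (rev_first, rev_second, rev_allpay)]
target = (n - 1) / (n + 1)   # = 0.6 for n = 4
assert all(abs(a - target) < 0.01 for a in avgs)
print([round(a, 3) for a in avgs])
```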

3 Common Values and the Winner's Curse

In a common value auction, bidders have the same value for the object, but each bidder only observes a private signal about that value. An example would be a takeover battle, where the value of the target company is the same for each bidder, but each bidder obtains an independent signal about that value. In such auctions, if a bidder wins the auction, it is likely that the other bidders received worse signals than the winner. In other words, the value of the object conditional on winning is smaller than the unconditional expected value. If this is not taken into account, the winner might bid an amount more than the actual value of the object, a situation known as the winner's curse.

As an example, suppose v = t_1 + t_2, where v is the common value but bidder i observes only the signal t_i. Assume that each t_i is distributed independently and uniformly over [0, 1], and suppose that the auction is a first-price sealed bid auction. Denote the strategies by b_i(t_i) and look for an equilibrium in which b_i(t_i) = a t_i. The expected payoff of player 1, given that player 2 bids according to b_2(t_2) = a t_2, is

  U_1(b_1, t_1) = E[v − b_1 | b_1 > b_2] prob(b_1 > b_2)
              = E[t_1 + t_2 − b_1 | b_1 > a t_2] prob(b_1 > a t_2)
              = E[t_1 + t_2 − b_1 | t_2 < b_1/a] prob(t_2 < b_1/a)
              = (t_1 + E[t_2 | t_2 < b_1/a] − b_1)(b_1/a)
              = (t_1 + b_1/(2a) − b_1)(b_1/a).

The first order condition is

  ∂U_1(b_1, t_1)/∂b_1 = (1/(2a) − 1)(b_1/a) + (t_1 + b_1/(2a) − b_1)(1/a) = 0,

which implies that (2 − 1/a) b_1 = t_1. For b_1 = a t_1 to be optimal we must have (2 − 1/a) a t_1 = t_1, which implies that a = 1. Therefore, b_i(t_i) = t_i is a Nash equilibrium.

As a comparison, consider the independent private values case where v_i = t_i + 0.5. Note that this is the expected value in the above model conditional upon observing t_i (but not conditional upon winning). Let's look for an equilibrium of the form b_i(t_i) = a t_i + c.
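The winner's curse is visible directly in this example: with the equilibrium bids b_i(t_i) = t_i, the event of winning means the rival's signal is below one's own, so the expected value conditional on winning is about t_1 + t_1/2 rather than t_1 + 1/2. A Monte Carlo sketch (an added illustration; the narrow signal window around t_1 = 0.5 is an arbitrary choice):

```python
import random

random.seed(3)
trials = 400_000

cond_sum = cond_n = 0
for _ in range(trials):
    t1, t2 = random.random(), random.random()
    # Equilibrium bids b_i = t_i: player 1 wins when t1 > t2.
    # Condition on signals in a narrow window around t1 = 0.5.
    if 0.5 < t1 < 0.51 and t1 > t2:
        cond_sum += t1 + t2
        cond_n += 1

e_win = cond_sum / cond_n
# Unconditionally, E[v | t1 = 0.5] = 1.0; conditional on winning it is
# about t1 + t1/2 = 0.75 -- the winner's curse.
assert 0.7 < e_win < 0.8
print(round(e_win, 2))
```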

for example. t1 ) = @b1 b1 a c + (t1 + 0:5 b1 ) 1 =0 a which is solved at 1 1 b1 = c + t1 + 0:25: 2 2 For b1 (t1 ) = at1 + c to be optimal. 2 2 t2 1 b2 (t2 ) = + 2 2 b1 (t1 ) = constitutes a Bayesian Nash equilibrium (indeed the unique equilibrium) of this auction. the …rst order condition is @U1 (b1 . 1]. we must have 1 1 c + t1 + 0:25 = at1 + c. then 8 . t1 ) = E[v b1 ]prob (b1 > b2 ) b1 ]prob (b1 > at2 + c) b1 c b1 ) prob t2 < a b1 c b1 ) a = E[t1 + 0:5 = (t1 + 0:5 = (t1 + 0:5 if a b1 a + c: Assume that this holds. Then. c = 0:5: Therefore.. might want to generate the highest revenue from the auction. For example. (Notice that this satis…es that above restriction a b1 a + c for all t1 :) Also note that t1 1 + 2 2 t1 for all t1 2 [0. i. 2 2 which implies that a = 0:5. whereas this value does not depend on the event of winning or not winning.expected payo¤ of player 1 to bidding b1 given that player 2 is using the strategy at2 + c is U1 (b1 . 4 Auction Design The auctioneer may have di¤erent objectives in designing an auction. The reason is that the expected value of the object is smaller conditional upon winning in common value auctions. that the company goes to the bidder with the highest valuation for it. or might want to make sure that it is e¢ cient. The government which is privatizing a company. Auction theory helps in designing auctions by comparing di¤erent auction formats in terms of their equilibrium outcomes.e. or to a bidder with some other characteristics. hence there is always underbidding in common value auctions. t1 1 + . if the objective is to generate the highest revenue.

In the case of private, independent values with the same number of risk-neutral bidders, the revenue equivalence theorem says that the format does not matter, as long as the reserve price is set right. In general, however, when the values are correlated (as in the case of common value auctions) or the bidders are risk averse, auction design becomes a challenging matter. In practice, collusion and entry deterrence also become relevant design problems: collusion is relevant because revenue equivalence does not hold if there is collusion, and, since the expected revenue from an auction increases in the number of bidders even when revenue equivalence holds, the auctioneer has an incentive to prevent entry deterrence.

4.1 Need for a Reserve Price

If there is only one bidder who comes to the auction, the seller will not make any money unless she sets a reserve price. What is the optimal reserve price? This is similar to the problem of a monopolist seller trying to find the optimal price. Assuming that the costs are sunk, so that the total payoff of the seller is given by the total revenue, the expected payoff from a reserve price p is

  E[R(p)] = prob(sale occurs at price p) × p = prob(p < v) p = (1 − F(p)) p.

This is maximized when

  −f(p) p + (1 − F(p)) = 0,

or

  p = (1 − F(p))/f(p).

So, if F is uniform over [0, 1],

  p = 1 − p,

which implies that p = 1/2. So, an optimal auction must set a reserve price of 0.5 in this particular case.

4.2 Common Values

We have seen above that the first-price sealed bid auction leads to lower bids in the case of common value auctions. In general, if the signals received by the bidders are positively correlated, the ascending auction raises more expected revenue than the second-price sealed bid auction, which in turn beats the first-price auction.
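Returning to the reserve price problem of section 4.1, a short grid search (an added illustration) confirms the optimum for the uniform case:

```python
# Expected revenue from a reserve price p with a single bidder, v ~ U[0,1]:
# (1 - F(p)) * p = (1 - p) * p, which the text shows is maximized at p* = 1/2.
def expected_revenue(p):
    return (1 - p) * p

best_p = max((k / 1000 for k in range(1001)), key=expected_revenue)
assert best_p == 0.5
print(best_p)
```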

4.3 Risk-Averse Bidders

In a second price auction risk aversion does not matter: the bidders always bid their values. In a first-price auction, however, an increase in risk aversion leads to higher bids, since it increases the probability of winning at the cost of reducing the value of winning. Therefore, a risk-neutral seller faced with risk-averse bidders prefers the first-price or (descending) Dutch auction to second-price or (ascending) English auctions.

4.4 Practical Concerns[3]

4.4.1 Collusion

A major concern in practical auction design is the possibility that the bidders explicitly or tacitly collude to avoid higher prices. As an example consider a multi-unit (simultaneous) ascending auction. In such an auction, bidders can use the early stages, when prices are still low, to signal who should win which objects, and then tacitly agree to stop pushing prices up. In the 1999 Germany spectrum auction, for example, any new bid had to exceed the previous one by at least 10 percent. Mannesman bid 18.18 million on blocks 1-5 and 20 million on blocks 6-10. This was like an offer to T-Mobil (the only other credible bidder) to bid 20 million on blocks 1-5 (18.18 × 1.1 ≈ 20) and not bid on blocks 6-10. This is exactly what happened.

Bids can also be used to punish competitors. In the 1996-97 U.S. spectrum auction, U.S. West was competing vigorously with McLeod for lot number 378, a licence in Rochester, Minnesota. U.S. West bid $313,378 and $62,378 for two licences in Iowa in which it had earlier shown no interest, overbidding McLeod, who had seemed to be the uncontested high bidder for these licenses. McLeod got the point that it was being punished for competing in Rochester, and dropped out of that market. Since McLeod made subsequent higher bids on the Iowa licenses, the "punishment" bids cost U.S. West nothing.

A related phenomenon can arise in one special kind of sealed-bid auction, namely a uniform-price auction, in which each bidder submits a sealed bid stating what price it would pay for different quantities of a homogenous good, e.g., electricity (that is, it submits a demand function), and then the good is sold at the single price determined by the lowest winning bid. In such an auction, bidders can submit bids that ensure that any deviation from a (tacit or explicit) collusive agreement is severely punished: each bidder bids very high prices for smaller quantities than its collusively agreed share. Then, if any bidder attempts to obtain more than its agreed share (leaving other firms with less than their agreed shares), all bidders will have to pay these very high prices.

[3] This part is based on Paul Klemperer, "What Really Matters in Auction Design," Journal of Economic Perspectives, 2002, 16(1), 169-189.

However, if everyone sticks to their agreed shares, then these very high prices will never need to be paid, so deviation from the collusive agreement is unprofitable. The electricity regulator in the United Kingdom believes the market in which distribution companies purchase electricity from generating companies has fallen prey to exactly this kind of "implicit collusion."

4.4.2 Entry Deterrence

The second major area of concern of practical auction design is to attract bidders, since an auction with too few bidders risks being unprofitable for the auctioneer and potentially inefficient. Ascending auctions are often particularly poor in this respect, since they can allow some bidders to deter the entry, or depress the bidding, of rivals. In an ascending auction, there is a strong presumption that the firm which values winning the most will be the eventual winner, because even if it is outbid at an early stage, it can eventually top any opposition. As a result, other firms have little incentive to enter the bidding, and may not do so if they have even modest costs of bidding.

Glaxo's 1995 takeover of the Wellcome drugs company is an example. After Glaxo's first bid of 9 billion pounds, Zeneca expressed willingness to offer about 10 billion pounds if it could be sure of winning, while Roche considered an offer of 11 billion pounds. But certain synergies made Wellcome worth a little more to Glaxo than to the other firms, and the costs of bidding were tens of millions of pounds. Eventually, neither Roche nor Zeneca actually entered the bidding, and Wellcome was sold at the original bid of 9 billion pounds, literally a billion or two less than its shareholders might have received. Wellcome's own chief executive admitted "...there was money left on the table."

4.4.3 Solutions

Much of the kind of behavior discussed so far is hard to challenge legally, and trying to outlaw it all would require cumbersome rules that restrict bidders' flexibility and might generate inefficiencies, without being fully effective. It would be much better to solve these problems with better auction designs.

Much of our discussion has emphasized the vulnerability of ascending auctions to collusion and predatory behavior. However, ascending auctions have several virtues as well. An ascending auction is particularly likely to allocate the prizes to the bidders who value them the most, since a bidder with a higher value always has the opportunity to rebid to top a lower-value bidder who may initially have bid more aggressively. If there are complementarities between the objects for sale, a multi-unit ascending auction makes it more likely that bidders will win efficient bundles than in a pure sealed-bid auction, in which they can learn nothing about their opponents' intentions.

Allowing bidders to learn about others' valuations during the auction can also make the bidders more comfortable with their own assessments and less cautious, and it often raises the auctioneer's revenues if information is correlated.

A number of methods to make the ascending auction more robust are clear enough. For example, bidders can be forced to bid "round" numbers, the exact increments can be prespecified, and bids can be made anonymous. These steps make it harder to use bids to signal other buyers. Lots can be aggregated into larger packages to make it harder for bidders to divide the spoils, and keeping secret the number of bidders remaining in the auction also makes collusion harder. But while these measures can be useful, they do not eliminate the risks of collusion or of too few bidders.

An alternative is to choose a different type of auction. In a standard sealed-bid auction (or "first-price" sealed-bid auction), each bidder simultaneously makes a single "best and final" offer, and the winner pays his bid. Tacit collusion is particularly difficult, since firms are unable to use the bidding to signal; collusion is much harder than in an ascending auction because firms are also unable to retaliate against bidders who fail to cooperate with them.

From the perspective of encouraging more entry, the merit of a sealed-bid auction is that the outcome is much less certain than in an ascending auction. An advantaged bidder will probably win a sealed-bid auction, but it must make its single final offer in the face of uncertainty about its rivals' bids, and because it wants to get a bargain its sealed bid will not be the maximum it could be pushed to in an ascending auction. So "weaker" bidders have at least some chance of victory, even when they would surely lose an ascending auction. It follows that potential entrants are likely to be more willing to enter a sealed-bid auction than an ascending auction.

A solution to the dilemma of choosing between the ascending (often called "English") and sealed-bid (or "Dutch") forms is to combine the two in a hybrid, the "Anglo-Dutch," which often captures the best features of both. It was first described and proposed in Klemperer (1998), "Auctions with Almost Common Values," European Economic Review, 42, pp. 757-69. In an Anglo-Dutch auction the auctioneer begins by running an ascending auction in which the price is raised continuously until all but two bidders have dropped out. The two remaining bidders are then each required to make a final sealed-bid offer that is not lower than the current asking price, and the winner pays his bid.

Good auction design is not "one size fits all" and must be sensitive to the details of the context.

 .

.

     .

         .

        .

            .

                 .

   .

 .

    .

     .

                .

                .

            .

       .

                    ! .

   .

             .

   .

                    .

      .

 "                         #   .

    .

       .

     .

 $  .

  %                   .

             .

  .

     & '()      '*)       '+)      #    .

     .        .

        .

 .

 #            .

.

          .

$  %      #         .

     .

     .

  .//  -(//         -0// #   .     -.

        -./      ')    ')   1         .

     ( 2.

   .

          .

  ( .

  "                                   (& " 2.

                  & .

.

        .

    .

                   .

                      .

.

.

  .

      '                     ) #                         .

          #        .

         .

          .

 #  .

          .

       .

  .

                 .

      3       .

                   .

   #   .

           .

 .

   4.

          .

    #                  .

       1        .

     .

    .

                    1   .

  .

                 '    *) 5                  .

 .

  .

   .

               .

          * .

  "                                                     *& " 2.

  4.

 4.

 4          .

.

   .

                             .

    .

.

      .

.

              4     .

.

       .

      .

             .

 #            .

.

                 4        .

     .

           .

   .

             1                        .

      .

.

    (             &                                      .

     .

//       #          & '()  '*)  '+)  '6)           #    .              & (//  .

 .

.

     7             .

.

    +    (   *       (                    *(   +&  3   .

2.

 + .

    (   .

           (                 .                              &         5   .

.

  #     .

        %  .

            .

    $ .

 '    6)            4        8 8    .

 .

                 6& $9 2.

 4  .

        .

.

 .

 .

        .

   .

     .

.

     .

     .

.

                          #           .

       .

 #   .

.

 .

.

.

   .

     .

.

                        .

.

       .

              .

  #           '      )        .

.

  # .

  .

 :   .

      .      .

             .

            .

 .

    .

  .

                              .

  '4                .

 .

     .

)                  '            $ )      .

      .

  .

        :     .

          .

  .

    .

    .

       .             .

       .

  .

  6 .

    $  !                (    -(// #       .

Extensive Form Games with Incomplete Information

Levent Koçkesen

1 Introduction

So far we have analyzed games in strategic form with and without incomplete information, and extensive form games with complete information. In this section we will analyze extensive form games with incomplete information. Many interesting strategic interactions can be modelled in this form, such as signalling games, bargaining games with incomplete information, repeated games with incomplete information in which reputation building becomes a concern, etc.

The analysis of extensive form games with incomplete information will show that we need to provide further refinements of the Nash equilibrium concept. In particular, we will see that the subgame perfect equilibrium (SPE) concept that we introduced when we studied extensive form games with complete information is not adequate. To illustrate the main problem with the SPE concept, consider the following game with imperfect, but complete, information.

[Figure 1: Something Wrong with SPE. Player 1 either chooses O, which ends the game with payoffs (1, 1), or chooses T or B; player 2, at an information set I that does not distinguish between T and B, then chooses L or R.]

The strategic form of this game is given by

         L      R
  O    1, 1   1, 1
  T    2, 1   0, 0
  B    0, 2   0, 1

It can be easily seen that the set of Nash equilibria of this game is {(T, L), (O, R)}. Since this game has only one subgame, i.e., the game itself, this is also the set of SPE.
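The set of Nash equilibria can be verified by enumerating all pure action profiles. A minimal Python sketch, using the payoff entries of the strategic form above:

```python
# Enumerate pure-strategy Nash equilibria of the Figure 1 game in
# strategic form.  payoffs[(r, c)] = (u1, u2) for player 1's action r
# and player 2's action c; the entries follow the table in the text.
payoffs = {
    ("O", "L"): (1, 1), ("O", "R"): (1, 1),
    ("T", "L"): (2, 1), ("T", "R"): (0, 0),
    ("B", "L"): (0, 2), ("B", "R"): (0, 1),
}
rows, cols = ["O", "T", "B"], ["L", "R"]

def is_nash(r, c):
    """(r, c) is a Nash equilibrium iff neither player gains by deviating."""
    u1, u2 = payoffs[(r, c)]
    no_dev_1 = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
    no_dev_2 = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
    return no_dev_1 and no_dev_2

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# [('O', 'R'), ('T', 'L')] -- exactly the set {(T, L), (O, R)}
```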

But there is something implausible about the (O, R) equilibrium, because it does not test the rationality of player 2 at the non-singleton information set I. Action R is strictly dominated for player 2 at the information set I; therefore, rationality of player 2 requires that he play action L if the game ever reaches there. In other words, if the game ever reaches that information set, player 2 should never play R. Knowing that, player 1 should play T, as she would know that player 2 would play L, and she would get a payoff of 2, which is bigger than the payoff that she gets by playing O. Subgame perfect equilibrium cannot capture this: the continuation game composed of the information set I and the nodes that follow from it does not start with a single decision node, and hence it is not a subgame. We would like players to be rational not only in every subgame but also in every continuation game.

The above discussion suggests the direction in which we have to strengthen the SPE concept. In general, we require that:

(Condition 1: Beliefs) At each information set, the player who moves at that information set has beliefs over the set of nodes in that information set; and

(Condition 2: Sequential Rationality) At each information set, given the beliefs and the subsequent strategies, strategies must be optimal.

In general, the optimal action at an information set may depend on which node in the information set the play has reached. As an example, consider the following modification of the above game.

[Figure 2: Here the optimal action of player 2 at the information set I depends on whether player 1 has played T or B, information that player 2 does not have.]

In this game, analyzing player 2's decision problem at the information set I requires him to form beliefs regarding which decision node he is at. Let us first check what Conditions 1 and 2 imply in the game given in Figure 1. Condition 1 requires that player 2 assign beliefs to the two decision nodes at the information set I.

Let the probability assigned to the node that follows T be μ ∈ [0, 1] and the one assigned to the node that follows B be 1 − μ. Given these beliefs, the expected payoff to action L is

  μ × 1 + (1 − μ) × 2 = 2 − μ,

whereas the expected payoff to R is

  μ × 0 + (1 − μ) × 1 = 1 − μ.

Notice that 2 − μ > 1 − μ for any μ ∈ [0, 1]. Therefore, Condition 2 requires that in equilibrium player 2 never play R with positive probability. This eliminates the subgame perfect equilibrium (O, R), which, we argued, was implausible.

Although Condition 1 requires players to form beliefs at non-singleton information sets, it does not specify how these beliefs are formed. As we are after an equilibrium concept, we require the beliefs to be consistent with the players' strategies whenever possible. Suppose player 1 plays actions O, T, and B with probabilities β1(O), β1(T), and β1(B), respectively, and let μ again be the belief assigned to the node that follows T in the information set I. If, for example, β1(T) = 1 and μ = 0, then we have a clear inconsistency between player 1's strategy and player 2's beliefs; the only consistent belief in this case would be μ = 1. In general, whenever possible, we may apply Bayes' rule to achieve consistency:

  μ = β1(T) / (β1(T) + β1(B)).

Of course, this requires that β1(T) + β1(B) ≠ 0. If β1(T) + β1(B) = 0, i.e., player 1 plays action O with probability 1, then player 2 does not obtain any information regarding which one of his decision nodes has been reached from the fact that the play has reached I, and we cannot apply Bayes' rule. The weakest consistency condition that we can impose is then:

(Condition 3: Weak Consistency) Beliefs are determined by Bayes' rule and the players' strategies whenever possible.

These three conditions define the equilibrium concept of Perfect Bayesian Equilibrium (PBE).

2 Perfect Bayesian Equilibrium

To be able to define PBE more formally, let Hi be the set of all information sets at which player i moves, and let A(h) be the set of actions available at information set h. A behavioral strategy for player i is a function βi which assigns to each information set h ∈ Hi a probability distribution on A(h); that is, βi(a) ≥ 0 for each a ∈ A(h), and

  Σ_{a ∈ A(h)} βi(a) = 1.
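Weak consistency is mechanical to apply; a small sketch of the belief computation, with the zero-probability case handled separately:

```python
def belief_mu(beta_T, beta_B):
    """Belief on the node following T, derived from player 1's behavioral
    strategy by Bayes' rule.  Returns None when T and B both have
    probability zero: the information set is then off the path of play
    and weak consistency places no restriction on the belief."""
    reach = beta_T + beta_B
    if reach == 0:
        return None
    return beta_T / reach

print(belief_mu(1.0, 0.0))    # 1.0  (player 1 plays T for sure)
print(belief_mu(0.25, 0.75))  # 0.25 (mixed strategy)
print(belief_mu(0.0, 0.0))    # None (only O is played)
```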

Let βi(a) denote the probability that βi assigns to action a. Let Bi be the set of all behavioral strategies available for player i, and let B = ×i Bi be the set of all behavioral strategy profiles. A belief system μ assigns to each decision node x in each information set h a probability μ(x), where Σ_{x ∈ h} μ(x) = 1 for every information set h. Let M be the set of all belief systems. An assessment (μ, β) ∈ M × B is a belief system combined with a behavioral strategy profile. A perfect Bayesian equilibrium is an assessment (μ, β) that satisfies Conditions 1-3.[1]

[1] Perfect Bayesian equilibrium, as it was defined in D. Fudenberg and J. Tirole (1991), "Perfect Bayesian and Sequential Equilibrium," Journal of Economic Theory, 53, 236-60, puts more restrictions on the out-of-equilibrium beliefs and hence is stronger than the definition provided here.

To illustrate, consider the game in Figure 2 again. Let μ be the belief assigned to the node that follows T in the information set I. In any PBE of this game we have (i) β2(L) = 1, (ii) β2(L) = 0, or (iii) β2(L) ∈ (0, 1). Let us check each of the possibilities in turn.

(i) β2(L) = 1: In this case, sequential rationality of player 2 implies that the expected payoff to L is greater than or equal to the expected payoff to R, i.e.,

  μ × 1 + (1 − μ) × 1 ≥ μ × 0 + (1 − μ) × 2, or 1 ≥ 2 − 2μ, i.e., μ ≥ 1/2.

Sequential rationality of player 1, on the other hand, implies that she plays T, i.e., β1(T) = 1. Bayes' rule then implies that

  μ = β1(T) / (β1(T) + β1(B)) = 1 / (1 + 0) = 1,

which is greater than 1/2, and hence does not contradict player 2's sequential rationality. Therefore, the following assessment is a PBE: β1(T) = 1, β2(L) = 1, μ = 1.

(ii) β2(L) = 0: Sequential rationality of player 2 now implies that μ ≤ 1/2. For player 1, the expected payoff to O is 1, to T is 2β2(L) = 0, and to B is 0, so sequential rationality of player 1 implies that β1(O) = 1. Since β1(T) + β1(B) = 0, we cannot apply Bayes' rule, and hence Condition 3 is trivially satisfied. Therefore, there is a continuum of equilibria of the form β1(O) = 1, β2(L) = 0, μ ≤ 1/2.

(iii) β2(L) ∈ (0, 1): Sequential rationality of player 2 implies that μ = 1/2. For player 1, the expected payoff to O is 1, to T is 2β2(L), and to B is 0. Clearly, player 1 will never play B with positive probability, that is, in this case we always have β1(B) = 0. If β1(T) > 0, then Bayes' rule implies that μ = 1, contradicting μ = 1/2. Therefore β1(O) = 1, in which case we cannot apply Bayes' rule and μ = 1/2 does not violate Condition 3; sequential rationality of player 1 then requires 2β2(L) ≤ 1, i.e., β2(L) ≤ 1/2. Therefore, any assessment with β1(O) = 1, 0 < β2(L) ≤ 1/2, and μ = 1/2 is a PBE.
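The case analysis above can be double-checked mechanically. A sketch of a PBE checker for the Figure 2 game, hard-coding the payoffs used in the text (player 1 gets 1 from O, 2 from (T, L), and 0 otherwise; player 2 gets 1 from L at either node, 0 from (T, R), and 2 from (B, R)):

```python
def is_pbe(b1_O, b1_T, b1_B, b2_L, mu, tol=1e-9):
    """Check Conditions 1-3 for an assessment in the Figure 2 game."""
    # Sequential rationality of player 2 at I: any action played with
    # positive probability must be a best response given the belief mu.
    u2_L, u2_R = 1.0, 2.0 * (1.0 - mu)
    if b2_L > tol and u2_L < u2_R - tol:
        return False
    if b2_L < 1 - tol and u2_R < u2_L - tol:
        return False
    # Sequential rationality of player 1.
    u1 = {"O": 1.0, "T": 2.0 * b2_L, "B": 0.0}
    best = max(u1.values())
    for action, prob in (("O", b1_O), ("T", b1_T), ("B", b1_B)):
        if prob > tol and u1[action] < best - tol:
            return False
    # Weak consistency: Bayes' rule whenever I is reached with
    # positive probability.
    if b1_T + b1_B > tol and abs(mu - b1_T / (b1_T + b1_B)) > tol:
        return False
    return True

print(is_pbe(0, 1, 0, 1, 1.0))    # True:  case (i)
print(is_pbe(1, 0, 0, 0, 0.3))    # True:  case (ii), mu <= 1/2
print(is_pbe(1, 0, 0, 0.5, 0.5))  # True:  case (iii)
print(is_pbe(0, 1, 0, 0, 0.0))    # False: player 1 would deviate to O
```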

Perfect Bayesian Equilibrium could be considered a weak equilibrium concept, however, because it does not put enough restrictions on out-of-equilibrium beliefs. Consider the three-player game given in Figure 3.

[Figure 3: PBE may have "unreasonable" beliefs. Player 1 chooses A, which ends the game, or D; after D, player 2 chooses L or R; player 3, whose information set contains the nodes following L and R, chooses L′ or R′.]

The unique subgame perfect equilibrium of this game is (D, L, R′). However, the strategy profile (A, L, L′), together with the belief system that puts probability 1 on the node that follows R, is an assessment that satisfies Conditions 1-3: player 3's information set is off-the-equilibrium path, so Bayes' rule has no bite there. Clearly, this is not a plausible outcome, as (L, L′) is not a Nash equilibrium of the subgame that starts with player 2's move. Also notice that player 3's beliefs are not consistent with player 2's strategy.

The most commonly used equilibrium concept that does not suffer from such deficiencies is that of sequential equilibrium. Before we can define sequential equilibrium, we have to define a particular consistency notion. A behavioral strategy profile is said to be completely mixed if every action receives positive probability.

Definition (Consistency): An assessment (μ, β) is consistent if there exists a sequence of assessments (μⁿ, βⁿ) with βⁿ completely mixed such that (μⁿ, βⁿ) converges to (μ, β) and μⁿ is derived from βⁿ using Bayes' rule for all n.

An assessment (μ, β) is a sequential equilibrium if it is sequentially rational and consistent.

To illustrate, consider the game in Figure 3 again, and let μ be the probability assigned to the node that follows L in player 3's information set. The assessment given by ((D, L, R′), μ = 1) is easily checked to satisfy sequential rationality. To check consistency, we have to find a completely mixed sequence βⁿ with βⁿ(D) → 1, βⁿ(L) → 1, βⁿ(R′) → 1 whose induced beliefs converge to μ = 1. Let, for example,

  βⁿ(D) = 1 − 1/n, βⁿ(L) = 1 − 1/n, βⁿ(R′) = 1 − 1/n,

so that

  μⁿ = βⁿ(L) / (βⁿ(L) + βⁿ(R)) = 1 − 1/n → 1.

Notice that μⁿ is derived from βⁿ via Bayes' rule and (μⁿ, βⁿ) → (μ, β). Therefore, this assessment is a sequential equilibrium. The implausible assessment above, in which player 2 plays L but player 3 puts probability 1 on the node that follows R, is not consistent: for any completely mixed sequence with βⁿ(L) → 1 we have

  μⁿ = βⁿ(L) / (βⁿ(L) + βⁿ(R)) → 1,

so μⁿ → 0 is not possible, and that assessment is not a sequential equilibrium.
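The limiting beliefs in the consistency check above are easy to reproduce numerically:

```python
# Beliefs induced by the completely mixed sequence beta_n(L) = 1 - 1/n:
# the belief on the node after L converges to 1, so mu = 1 is consistent,
# while putting probability 1 on the node after R is not.
def mu_n(n):
    bL, bR = 1 - 1 / n, 1 / n
    return bL / (bL + bR)  # Bayes' rule along the sequence

for n in (10, 100, 1000):
    print(n, mu_n(n))  # 0.9, 0.99, 0.999: converging to 1, never to 0
```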
at least one type of sender sends the message m0 : To illustrate consider the game in …gure 4. In this game Nature (N) chooses the type of player 1 to be Tough (T ) (with probability 0. Bayes’ rule implies.An assessment (¹. and a receiver. a sender S. 1 2 3 ¯ 2 (L) + ¯ n (R) 2 which is not possible.e. consider the game in Figure 3 again.

then. (b) Weak chooses beer. 0 p p @ @ F pp @ p @ pp @pr ¡ ¡ ¡A r¡ ¡ r 0. 1 Figure 4: Beer or Quiche Let us …nd the pure strategy PBE of this game. Tough chooses beer (¯ S (QjW ) = 1. ¹ (T jB) = 1: Therefore. Each type chooses a di¤erent action (Separating Equilibria): (a) Weak chooses quiche. there is no PBE of this type. 0 p T p ¡ ¡ p F p ¡ p p ¡ p r ¡ pr 1 B @ @ A@ @r @ 3. the receiver’s sequential rationality implies that ¯ R (F jB) = 1 and ¯ R (AjQ) = 1: Sequential rationality of the sender. ¯ S (QjT ) = 0) : Bayes’ rule implies that ¹ (W jQ) = ¯ S (QjW ) p (W ) 1 £ 0:1 = = 1: ¯ S (QjW ) p (W ) + ¯ S (QjT ) p (T ) 1 £ 0:1 + 0 £ 0:9 Similarly. Player 1 observes her type and chooses Quiche (Q) or Beer (B) : Player 2 observes only the action choice of player 1 but not the type. the receiver’s sequential rationality implies that ¯ R (AjB) = 1 and ¯ R (F jQ) = 1: Sequential rationality of the sender. 0 p W p p p p p p bN 2 pp p p p p p p r 1. ¯ S (QjT ) = 1) : Bayes’ rule implies that ¹ (T jQ) = 1 and ¹ (W jB) = 1: Therefore. and chooses to …ght (F ) or not to …ght (A): 1. 7 .1). contradicting our hypothesis. 1 r @ @ @F @ @r ¡pp ¡ pp ¡ A pp p ¡ r 3. So.(with probability 0. 1 ¡ ¡ F¡ ¡ 1 B r r ¡ p@ p p @ p p A@ p p @r @ 2. implies that ¯ S (QjW ) = 0. Tough chooses quiche (¯ S (QjW ) = 0. there is no PBE of this type either. There are four types of possible equilibria: 1. 1 Q Q 2. So. 0 ¡ p p p p p p p 2 pp p p p p p p r 0. implies that ¯ S (QjW ) = 1. contradicting our hypothesis. then.

2. Both types choose the same action (Pooling Equilibria):

(a) Both choose quiche (βS(Q|W) = 1, βS(Q|T) = 1). Bayes' rule implies that μ(W|Q) = 0.1 and μ(T|Q) = 0.9. Therefore, after observing Q, the receiver's expected payoff to F is 0.1 × 1 + 0.9 × 0 = 0.1 and the expected payoff to A is 0.1 × 0 + 0.9 × 2 = 1.8, and hence sequential rationality implies that βR(A|Q) = 1. The weak type's sequential rationality then implies that βS(Q|W) = 1, confirming our hypothesis. For the tough type, playing quiche would be rational only if the receiver chooses to fight after observing beer, i.e., we must have βR(F|B) = 1, which in turn requires that μ(W|B) ≥ 1/2. Therefore, any assessment which satisfies the following is a PBE:

  βS(Q|W) = 1, βS(Q|T) = 1, βR(A|Q) = 1, βR(F|B) = 1, μ(W|Q) = 0.1, μ(W|B) ≥ 1/2.

(b) Both choose beer (βS(B|W) = 1, βS(B|T) = 1). It is easily checked that the following constitute the set of PBE of this type:

  βS(B|W) = 1, βS(B|T) = 1, βR(A|B) = 1, βR(F|Q) = 1, μ(W|B) = 0.1, μ(W|Q) ≥ 1/2.

Job Market Signalling[2]

Suppose there are two types of workers, a high ability (H) and a low ability (L) type. We let the probability of having high ability be denoted by p ∈ (0, 1). The output is equal to 2 if the worker is of high ability and equal to 1 if he is of low ability. The worker can choose a level of education e ≥ 0 before applying for a job; the cost of education level e is e for the low ability worker and e/2 for the high ability worker. The worker knows his ability, but the employer observes only the level of education, not the ability. After observing e, the employer offers a wage w(e) as a function of education. The payoffs of the workers are given by

  u(w, e, H) = w − e/2,  u(w, e, L) = w − e.

[2] Based on M. Spence (1973), "Job Market Signalling," Quarterly Journal of Economics, 87, 355-74.

Pooling Equilibria (eH = eL = e¤ ): The Bayes’ rule in this case implies that ¹ (Hje¤ ) = p and ¹ (Lje¤ ) = 1 ¡ p: Therefore. e < e H ¹ (Hje) = : : 1. the wage schedule will satisfy w (e) = 2¹ (Hje) + (1 ¡ ¹ (Hje)) : We are interested in the set of PBE of this game. u (w. for all e ¸ 0: The above inequalities are satis…ed if and only if e¤ · p: We can. e = e¤ ¹ (Hje) = : 0. e¤ . Therefore. e¤ . respectively. if ¹ (Hje) denotes the belief of the employer that the worker is of high ability given that he has chosen education level e. it must be such that the low ability worker does not want to mimic the high ability worker and vice versa. e ¸ eH 2. Let eH and eL denote the education levels chosen by the high and low ability workers. Therefore.We assume that the job market is competitive and hence the employer o¤ers a wage schedule w (e) such that the expected pro…t is equal zero. the low ability worker will choose e = 0: In equilibrium. e 6= e¤ It must be the case that p + 1 ¡ e¤ ¸ 0: 9 . Separating Equilibria (eH 6= eL ): The Bayes’ rule in this case implies that ¹ (HjeH ) = 1 and ¹ (LjeL ) = 1: Therefore. in turn. w (e¤ ) = 2p + (1 ¡ p) : = p + 1 and hence u (w. p + 1 ¡ e¤ ¸ w (e) ¡ e. show that any such e¤ can be supported as an equilibrium by the following belief system 8 < p. H) = p + 1 ¡ e¤ =2. we need to have eH 2¡ ¸1 2 or eH · 2 and 1 ¸ 2 ¡ eH or 1 · eH : We can support any eH between 1 and 2 with the following belief system 8 < 0. we have w (eH ) = 2 and w (eL ) = 1: Given that. L) = p + 1 ¡ e¤ : p + 1 ¡ e¤ =2 ¸ 0 We also need to have p + 1 ¡ e¤ =2 ¸ w (e) ¡ e=2. 1.
