
Matching, Auctions, and Market Design

Matthew O. Jackson
Department of Economics, Stanford University.
Draft: December 2013

Abstract
I provide a brief introduction to the early literatures on Matching, Auctions, and
Market Design.
Keywords: Matching, Assignment, Auctions, Market Design

Excerpts from this manuscript, with some additions and modifications, were published as "Economic Engineering and the Design of Matching Markets: The Contributions of Alvin E. Roth," in the Scandinavian Journal of Economics, Vol. 115, Iss. 3, 619-639, 2013. Thank you to Ben Golub, Sara Jackson, Stephen Nei, Al Roth, and Yiqing Xing for comments.

Electronic copy available at: http://ssrn.com/abstract=2263502

1 Introduction

The design of matching markets and auctions has brought economic theory and practice
together. Indeed, this is an area where microeconomic theory has had its largest direct
impact. This is in part because it focuses on settings where people interact according to
very clearly delineated rules. Thus, the strategic interactions of the participants are relatively
easy to model, and outcomes are comparatively straightforward to assess relative to the many
other less structured interactions that occur in an economy. In view of this, these are arenas
where economic theory has directly shaped institutions ranging from the systems by which
students are assigned to public schools to the manner in which governments have auctioned
off the rights to parts of the broadcast spectra in particular geographic regions.
In this overview, I discuss some of the main theoretical insights that have been obtained in
these areas, as well as how these were motivated by application and how they have influenced
practice. In doing this, I will treat these areas separately. This is despite the fact that there
is a strong conceptual relationship between matching markets and auctions. Both involve
settings concerning the allocation of some objects (or in some cases even people) among a
set of economic agents. In many applications the set of objects to be allocated is finite and
discrete, and so there is some overlap in the tools of analysis. Nonetheless, the literatures
have developed largely independently, in part because the specifics of the applications are
sufficiently distinct to require specific analyses that have not led to many common conclusions
across the two literatures. As one rough distinction, the matching literature often considers
allocations where assignments are made without transfers of money or other divisible goods
(although there are some important exceptions), while the auctions literature focuses on
allocations of objects in exchange for some payments. These literatures are far too vast to
even begin to survey in a short piece, and so I focus on the early roots and insights that
emerged in these literatures and some of the most important ties and feedback between
theory and practice.

2 Matching and the Allocation of Indivisible Goods

There is a wide variety of matching markets, with prominent examples being the assignment
of medical students to hospitals, the assignment of students to public schools, the assignment
of available organs to transplant patients, and the employment of workers by firms. The

analysis of these subjects and recent designs of some institutions have made heavy use of
economic theory. The theory may be divided into two primary types of settings. The
settings differ in terms of whether economic agents are to be matched with each other (as
in the assignment of students to hospitals, where both the students and the hospitals care
about with whom they are matched) or to be matched with objects (for instance patients to
organs, where the patients have preferences but the organs do not). I describe some of the
early theoretical developments in both of these settings in turn, before returning to discuss
the applications and interaction between theory and practice.

2.1 Two-Sided Matching and Marriage Markets

The basic matching paradigm is what is known as "two-sided matching," also called "marriage markets," and refers to a bipartite matching setting. In its simplest formulation, there are two sets of economic agents who are to be matched to each other. The name "marriage markets" refers to the obvious case where there is a set of women and a set of men, and where a matching is a list of which pairs of women and men are to be married to each other (with the rest staying single). The canonical formulation is such that each woman
has a strict ranking over men, and each man has a strict ranking over women, which can
be thought of as representing their preferences. In addition, each agent has some threshold
at which they would prefer to remain single to being matched to anyone below. Women
can only be matched to men (or remain single), and vice versa. The main issue is to find
a matching that has a number of desiderata, such as being stable against agents trying to
rearrange their matchings.
There are two key milestones in the early matching literature. The first is the paper by
Gale and Shapley (1962) that provided the backbone for much of the theory that followed,
including the matching and discrete allocation literatures today. The second was an observation by Roth (1984) that the National Resident Matching Program, through which medical
students are matched to hospitals for their residency programs, used the deferred acceptance
algorithm that was shown by Gale and Shapley (1962) to select a matching with desirable
properties.
These milestones also correspond with the major strands of the literature that have interacted in a very healthy way. One strand comes from the theoretical foundations of matching
and the allocation of indivisible goods. The other comes from very direct applications of the

theories and the very practical issues that they entail. Before discussing how this became
one of the most important examples of economic engineering through the applications recognized and investigated by Roth in the early 1980s, let us start by briefly reviewing the
foundations laid by Gale and Shapley in the early 1960s.
2.1.1 Foundational Theory

There are two major contributions made in the relatively short but elegant paper by Gale
and Shapley (1962). The first is the formulation of the problem and the definition of stability.
The second is the specification of the deferred acceptance algorithm and the demonstration
that it always results in a stable match.[1]
Let me begin with a simple example that illustrates the definitions and some of the basic
results.
Consider an example of a society with three women and three men.[2] In this example, all
of the agents would prefer to be in some match than to remain single. The preferences of
the agents are described as follows:
Table 1: Women's Preference Rankings

    W1   W2   W3
    M1   M3   M1
    M3   M1   M3
    M2   M2   M2
So, from these tables we see that W1 (woman 1) prefers M1 over M3 over M2, and so
forth.
A match is a list of a partner for each agent, such that:[3]
[1] Quoting from Roth (2008a): "At his birthday celebration in Stony Brook on 12 July 2007, David Gale related the story of his collaboration with Shapley to produce GS by saying that he (Gale) had proposed the model and definition of stability, and had sent to a number of colleagues the conjecture that a stable matching always existed. By return mail, Shapley proposed the deferred acceptance algorithm and the corresponding proof."

[2] This example is from Moulin (1995, page 113).

[3] I will keep the discussion at a relatively nontechnical level. More formal definitions are given in the original papers, as well as many of the surveys listed in the references.

Table 2: Men's Preference Rankings

    M1   M2   M3
    W2   W1   W1
    W1   W3   W2
    W3   W2   W3

- the partner of any woman is either a man, or the woman herself (meaning that she would then be left single),
- the partner of any man is either a woman, or the man himself,
- if some woman is matched to some man, then that man is matched with that woman.
Gale and Shapley (1962) defined a match to be stable if no agent would prefer to remain
single over his or her current match, and no pair of agents would each prefer to be matched
with each other rather than stay with their current matches.[4]
In this example, it is easy to check that there are two stable matches: { (M1,W1),
(M2,W3), (M3,W2) } and { (M1,W2), (M2,W3), (M3,W1) }.
There are other matches that are feasible, but not stable. For example, there is no stable match where M1 and W3 are paired together. To see this note that M1 would prefer to be matched with W1 over W3, and that W1 finds M1 to be most preferred. Thus, if we attempted to pair M1 and W3, then M1 and W1 could block the matching, as each of them would prefer to be matched to the other compared to whomever they would be matched with in a match where M1 was matched with W3.
Thus, a match is stable if there are no pairwise deviations that would benefit both of
the agents in the pair, nor any agent who would rather be single than stay in the match.
[4] Even though this definition only considers blocking by one agent (choosing to remain single) or pairs of agents (choosing each other rather than the prescribed match), it is fairly easy to see that the definition turns out to be equivalent to one also allowing for blocking by larger groups of agents who collectively rearrange their matches to all end up better off (i.e., where the unblocked outcomes are known as the "core"). Effectively, if there is some blocking by a larger group of agents, then within that blocking is some pair, or some single individual, who could block; and so allowing for larger blocking groups does not destabilize any matchings that are not already destabilized by considering individual and pairwise deviations. For more detail on that issue, see Roth and Sotomayor (1990, Chapter 3).

Part of the idea behind this is that a match that is not stable would be vulnerable to being undermined by deviating agents in any setting where agents are aware of their options and cannot be forced to abide by an imposed matching.
This example illustrates a number of things about the set of stable matchings. First,
it shows that there may be more than one stable matching. Second, it illustrates that not
all matches are stable. Third, it also illustrates the cleverness of the deferred acceptance
algorithm of Gale and Shapley (1962) for finding a stable matching. With regard to this
third point, let us see why one needs a nontrivial algorithm to find a stable matching.
Suppose that we simply try what is perhaps the most obvious algorithm for looking
for a stable matching. We begin with some matching, say the notationally obvious one {
(M1,W1), (M2,W2), (M3,W3) }. We check whether it is stable. If it is blocked by some pair,
then we rearrange the matching in that way. Here, this is blocked by the pair (M1,W2) who
would each prefer to be matched to each other than to their current match.[5] So, we match
them and switch their partners to get { (M1,W2), (M2,W1), (M3,W3) }. Now, this match is
not stable either as it is blocked by (M3, W2) (among others). So we switch their partners
and get { (M1,W3), (M2,W1), (M3,W2) }. This is blocked by (M3,W1), so we switch their
partners and get { (M1,W3), (M2,W2), (M3,W1) }. This is blocked by (M1,W1) and so we
are back to { (M1,W1), (M2,W2), (M3,W3) }. We have come in a full circle back to the
starting point without finding a stable matching.
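The cycle just described can be replayed in a short sketch (my own illustrative code, not from the paper; the helper names are assumptions for the illustration):

```python
# Preference rankings from Tables 1 and 2, most preferred first.
women_prefs = {"W1": ["M1", "M3", "M2"], "W2": ["M3", "M1", "M2"], "W3": ["M1", "M3", "M2"]}
men_prefs   = {"M1": ["W2", "W1", "W3"], "M2": ["W1", "W3", "W2"], "M3": ["W1", "W2", "W3"]}

def is_blocking(match, m, w):
    """True if man m and woman w each prefer one another to their current partners."""
    partner_of_w = next(man for man, wom in match.items() if wom == w)
    return (men_prefs[m].index(w) < men_prefs[m].index(match[m])
            and women_prefs[w].index(m) < women_prefs[w].index(partner_of_w))

def satisfy(match, m, w):
    """Pair m with w, and pair their two abandoned partners with each other."""
    new = dict(match)
    jilted_w = match[m]
    jilted_m = next(man for man, wom in match.items() if wom == w)
    new[m], new[jilted_m] = w, jilted_w
    return new

match = {"M1": "W1", "M2": "W2", "M3": "W3"}   # the notationally obvious start
for m, w in [("M1", "W2"), ("M3", "W2"), ("M3", "W1"), ("M1", "W1")]:
    assert is_blocking(match, m, w)            # each step follows a genuine blocking pair
    match = satisfy(match, m, w)

print(match == {"M1": "W1", "M2": "W2", "M3": "W3"})  # True: we have cycled back
```

Note that which blocking pair one chooses to satisfy at each step makes a difference; other choices from the same starting match need not cycle.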
Thus, in order to identify a stable matching, one needs a more careful algorithm. One
of the key contributions of Gale and Shapley (1962), beyond defining stability, was to show
that a stable match always exists and that it can be found by a relatively simple and clever
algorithm.
The deferred acceptance algorithm works as follows. It can be stated in two ways, either
with men proposing or women proposing. It is stated here with men proposing. Let
us say that a man finds a woman acceptable if he prefers to be matched with her to staying
single, and similarly for women.
- Each man proposes to his most preferred woman provided that she is acceptable, and otherwise proposes to be single.
- Each woman selects the most preferred acceptable man who has proposed to her. If there is such a man, then that woman and man become engaged, and if the woman has not received any proposals from acceptable men then she remains single.
- Each man that is not currently engaged proposes to his most preferred acceptable woman to whom he has not yet proposed, and if there are no such acceptable women, does not make any proposal.
- Each woman selects the most preferred acceptable man out of her previous fiancé and those from whom she has received new proposals. If there is such an acceptable man, then that woman and man become engaged, and otherwise the woman remains single.
- We keep repeating this process by which the men who are not engaged propose to the highest acceptable woman on their lists to whom they have not yet proposed, until all men have either run out of acceptable women or are engaged.
- The resulting match is the ending set of engagements, with the un-engaged agents staying single.

[5] It is blocked in other ways too, for instance by (M3,W2). Which choice we make in proceeding here makes a difference.
To illustrate the algorithm in the context of the example, at the first round M1 proposes
to W2 while M2 and M3 both propose to W1. The first round engagements are then (M1,W2)
and (M3,W1), while M2 is rejected. M2 then proposes to W3, and is accepted, and thus we
end at the stable matching of { (M1,W2), (M2,W3), (M3,W1) }.
If we reversed the roles of women and men in the algorithm, we would actually find the
other stable matching of { (M1,W1), (M2,W3), (M3,W2) }.
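The algorithm can be sketched compactly (my own illustrative code; the function and variable names are not from the paper, and for brevity everyone is assumed to find everyone acceptable, as in the example):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposer-proposing deferred acceptance; returns {proposer: receiver}.
    Assumes every match is acceptable to everyone, as in the example."""
    engaged_to = {}                                 # receiver -> current proposer
    next_choice = {p: 0 for p in proposer_prefs}    # index of next receiver to try
    free = list(proposer_prefs)
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged_to.get(r)
        if current is None:
            engaged_to[r] = p                       # r had no fiance; accept p for now
        elif receiver_prefs[r].index(p) < receiver_prefs[r].index(current):
            engaged_to[r] = p
            free.append(current)                    # the jilted proposer re-enters the pool
        else:
            free.append(p)                          # rejected; p will try his next choice
    return {p: r for r, p in engaged_to.items()}

men_prefs   = {"M1": ["W2", "W1", "W3"], "M2": ["W1", "W3", "W2"], "M3": ["W1", "W2", "W3"]}
women_prefs = {"W1": ["M1", "M3", "M2"], "W2": ["M3", "M1", "M2"], "W3": ["M1", "M3", "M2"]}

print(sorted(deferred_acceptance(men_prefs, women_prefs).items()))
# men proposing:   [('M1', 'W2'), ('M2', 'W3'), ('M3', 'W1')]
print(sorted(deferred_acceptance(women_prefs, men_prefs).items()))
# women proposing: [('W1', 'M1'), ('W2', 'M3'), ('W3', 'M2')]
```

Running it in both directions reproduces the two stable matchings of the example.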
This relates to another interesting property of the deferred acceptance algorithm that was
shown by Gale and Shapley (1962). When the men propose, the resulting stable matching is
the most preferred by men out of all of the possible stable matchings. This holds in a strong
sense: each man (weakly) prefers his mate in this matching to the mate he would have in
any other stable matching, and the women find this matching least preferred: each woman
finds her mate the least attractive out of those she might be matched to out of all of the
stable matchings. If we reverse the roles in the algorithm, then we find the stable matching
that is most preferred by women and least preferred by men.
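In a market this small, these claims can be checked by brute force. The sketch below (my own code, with assumed helper names) enumerates all six possible matchings, confirms that exactly two are stable, and verifies that one of them is uniformly best for the men:

```python
from itertools import permutations

men_prefs   = {"M1": ["W2", "W1", "W3"], "M2": ["W1", "W3", "W2"], "M3": ["W1", "W2", "W3"]}
women_prefs = {"W1": ["M1", "M3", "M2"], "W2": ["M3", "M1", "M2"], "W3": ["M1", "M3", "M2"]}
men, women = list(men_prefs), list(women_prefs)

def is_stable(match):
    """No blocking pair exists (everyone is acceptable in this example)."""
    for m in men:
        for w in men_prefs[m]:
            if w == match[m]:
                break                        # no woman m prefers to his partner remains
            partner = next(x for x in men if match[x] == w)
            if women_prefs[w].index(m) < women_prefs[w].index(partner):
                return False                 # (m, w) would block
    return True

stable = [dict(zip(men, ws)) for ws in permutations(women) if is_stable(dict(zip(men, ws)))]
print(len(stable))   # 2

# Every man weakly prefers the same one of the two stable matchings (the men-optimal one).
men_optimal = min(stable, key=lambda mt: sum(men_prefs[m].index(mt[m]) for m in men))
assert all(men_prefs[m].index(men_optimal[m]) <= men_prefs[m].index(other[m])
           for m in men for other in stable)
print(men_optimal == {"M1": "W2", "M2": "W3", "M3": "W1"})   # True
```

The men-optimal matching found here is exactly the one produced by the men-proposing deferred acceptance algorithm.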
There are other interesting properties of stable matchings: for example, they have the
nice mathematical structure of forming a lattice under the preference partial-ordering of the

women (or of the men).[6] This is discussed by Knuth (1976) (who attributes the observation
to Conway in private communication).
To summarize, in this two-sided matching setting where agents have preferences over
whom they are matched with, there always exists at least one stable matching and one can
be found by the deferred acceptance algorithm. Moreover, there may be multiple stable
matchings that form a lattice structure, and the matching that is (uniformly) most preferred
by one particular side of the market can be found by having them be the proposers in the
algorithm.
With some of the basic background on the foundations in hand, let us now discuss
applications and some of the further issues that are raised.
2.1.2 Applications of Two-Sided Matching

As we saw through the example in the previous section, one needs to resort to some particular
techniques in order to identify a stable matching. By blindly following blocking pairs, one
might end up aimlessly cycling. In view of this, if a society is left to search for a matching in
a completely decentralized way it is unclear (and perhaps unlikely) that a stable match will
be found, especially in settings with large numbers of agents.[7] Having some system through
which matchings are identified can be critical for reaching stability, and moreover, such a
system will have to rely on some nontrivial algorithm and coordination to reach a stable
matching.
This leads us to the next critical contribution in this literature. Roth (1984) studied the
National Resident Matching Program, through which medical students (doctors) in the
U.S. are matched to hospitals for internships called residencies after completing their initial
studies in order to gain expertise in their specialties. This is a variation on the two-sided
matching problem, since hospitals may have many openings, while students wish to match
to just one hospital. This is sometimes referred to as the "college admissions problem," and
was described in the original Gale and Shapley (1962) paper as it is a variation on the simple
setting above. It maps well to that setting, as long as each opening at a given hospital can
[6] A lattice is a set that has a nice (partial) ordering property. In this case, for any two matchings, there exists a third (possibly the same as one of the two) that every man weakly prefers to either of the first two, and also one that is weakly worse for all men. This also holds relative to women's preferences, but with the reverse ranking.

[7] There are some dynamics that can lead to stable matchings, as discussed in Roth and Vande Vate (1990).

be treated separately, with its preferences unaffected by how the other openings are filled.
In particular, Roth noted that starting in 1951, the matching of doctors to hospitals was
centralized and worked precisely through the use of the deferred acceptance algorithm! His
article goes further, to provide a convincing discussion of the evolution of that market and
how it came to adopt the deferred acceptance algorithm. In short, Roth argued that the
centralized matching procedure was put in place in response to the chaos, and in particular
to a sort of unraveling, that was occurring in the market prior to 1951.
The scenario that Roth (1984) discusses, well-informed by theory, is a rich one. Before
there was a matching program, hospitals and students were each struggling to obtain the best
match that they could. Without any centralized procedure and in a large world, students
might not have much of an idea about the preferences of hospitals, much less the other
students' identities or preferences, and so forth. As one might expect given some of the
analysis above, trying to find the best match for oneself and then locking into it at an early
stage becomes a prevalent strategy, and indeed Roth reports that offers were made earlier
and earlier in a medical student's studies, with short deadlines in attempts to finalize the
match. Along with this, agreements would be reached and then broken when a better match
appeared. Generally the market was quite chaotic and inefficient on many dimensions with
decisions being made with insufficient information, unraveling occurring over time so that
offers were made earlier and earlier in careers, suboptimal matches realized, and excess effort
spent on the process by many parties.
As Roth (1984) describes it, in an attempt to counter the chaos, a major medical society
introduced the National Resident Matching Program on a purely voluntary basis. And yet
shortly after its introduction, around ninety-five percent of matches were made through the
program and the deferred acceptance algorithm in particular. This was quite successful for
many years, leading to stable matches. One might wonder if this were simply due to the
introduction of some algorithm rather than the specifics of the deferred acceptance algorithm.
Roth's (1990, 1991) further studies of programs in the U.K. provided interesting contrasts in
outcomes, as different regions had adopted different algorithms. Those adopting the deferred
acceptance algorithm saw stability and kept their algorithms in place, and those that tried
other algorithms saw instability and abandoned their algorithms.
The hypothesis that the stability of the deferred acceptance algorithm would lead to a
market that would not unravel, while the use of other unstable algorithms would be accompanied by unraveling and matches undertaken outside the mechanism, was confirmed
in laboratory experiments by Kagel and Roth (2000) who contrasted two of the mechanisms from the U.K. regional markets and found such unraveling with an unstable matching
mechanism but not with deferred acceptance. Indeed, the combination of empirical and
experimental methods have become increasingly important inputs into the design of mechanisms and diagnostic tools of their failures (e.g., McKinney, Niederle and Roth (2005)). For
example, Niederle and Roth's (2003b) study of the gastroenterology market was helpful in
the re-adoption of the deferred acceptance algorithm, which appears successful (Niederle,
Proctor and Roth (2008)).
As initially noted by Roth (1984), and discussed in further detail in Roth and Sotomayor
(1990), the example of the National Resident Matching Program raises a number of other interesting questions for the theory, and has led to a variety of further explorations. As one
example of further issues, beginning in the 1970s, more matches began to appear outside of
the program. This reflected the fact that many residents were married to other residents,
and so had some preference to end up in the same location. In the description of the theory
above, it is (implicitly) assumed that each agent can list a ranking over potential matches
that does not depend on the rest of the matching. This is no longer true with residents
who want to be matched in close proximity to their spouses. Moreover, there are many
other reasons for people to care about more than just the outcomes of their own matches.
For instance, it may be that hospitals have several possible openings. One way to handle a
hospital having several positions is to treat the hospital as several different agents in the system, each having identical preferences. This works fine as long as two things are true. First,
the hospital does not care about the mix of people it obtains, instead just wanting the best
people from its list. Second, residents do not care about which other residents fill the other
openings at the hospital to which they are matched. It is easy to think of reasons for both of
these conditions to be violated, at least under some circumstances. For instance, a hospital
might want a variety of skills and interests in the residents it selects, and some residents
might be viewed as substitutes for each other. On the other side of the market, it might not
be that a resident cares only about being at the best hospital, but also cares about being
together with the best residents. This substantially complicates the preference structure
and leads to many interesting issues in finding stable matches.[8]

[8] Some such issues with externalities in preferences were investigated by Kelso and Crawford (1982) and Roth (1985), and subsequently a number of others. More generally, one then runs into a many-to-many matching problem for which the existence of stable matchings is a more problematic issue (e.g., Dreze and Greenberg (1980), Banerjee, Konishi, and Sonmez (2001), and Bogomolnaia and Jackson (2002)). With some conditions on the preferences, some forms of stability can be ensured (see Blair (1988), Echenique and Oviedo (2006), Hatfield and Kominers (2011)).

As another example of further questions that arise from such an application, one can ask to what extent students and hospitals have incentives to reveal their true rankings. Could they obtain a better match if they lied in submitting their rankings? Perhaps intuitively, under the deferred acceptance algorithm it is a dominant strategy for the proposers (the men in the version above) to list their preferences truthfully, but there are instances where the other side of the market (the women in the version above) can gain by lying. This was shown by Dubins and Freedman (1981) and Roth (1982a). We can see this in the context of the example above. Recall that under the deferred acceptance algorithm at the first round M1 proposes to W2 while M2 and M3 both propose to W1. The first round engagements are then (M1,W2) and (M3,W1), while M2 is rejected. M2 then proposes to W3, and is accepted, and we end at the stable matching of { (M1,W2), (M2,W3), (M3,W1) }. Suppose that instead of being truthful, W2 claims that only M3 is acceptable. This changes the outcome in the first round, as effectively W2 now rejects M1's proposal. With this manipulation, the only first round engagement is (M3,W1), and so then at the second round M1 now proposes to W1 and M2 to W3. Then W1 will break her engagement with M3 and accept M1, and it is easy to check that we then end up with the stable matching of { (M1,W1), (M2,W3), (M3,W2) }, which is better for W2, who now ends up with her most preferred match.

Given that the deferred acceptance algorithm is manipulable, one might wonder if there is some other mechanism for obtaining stable matchings that is non-manipulable. It turns out that there is no mechanism that always results in a stable matching and also is such that it is a dominant strategy for everyone to reveal their rankings truthfully (so that there are no profiles of preferences such that some agent could gain by manipulating his or her announced ranking). This is another critical observation in the literature, and was first shown by Roth (1982a), and eventually in more detailed forms by Roth and Rothblum (1999) and Sonmez (1997, 1999).[9]

[9] Roth (1989) also showed that this result can be strengthened to show that no (direct) mechanism has Bayesian Nash equilibria that always result in stable matchings.

The recognition that non-manipulability and stability are incompatible has led to much of the more recent follow-up literature. To understand why, note that once agents'

incentives to report their rankings are taken into account, the outcomes of the mechanism
can change, and hence possibly lead to unstable and inefficient outcomes. That is, when
Gale and Shapley (1962) analyzed the deferred acceptance algorithm, they assumed that the
agents making their choices at each point act in accordance with their true rankings and do
not try to manipulate the outcome of the algorithm. However, that algorithm, and moreover
any algorithm that always finds a stable matching under truthfulness, is manipulable in at
least some instances. Thus, if one analyzes matching algorithms with manipulation in mind,
then the outcomes can change. In that light, one must then revisit the question of what a
good mechanism is.
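W2's truncation strategy from the example above can be replayed with a deferred acceptance variant that respects reported acceptability (a sketch of my own, not the paper's; the names are illustrative, and a man absent from a woman's submitted list is treated as unacceptable to her):

```python
men_prefs   = {"M1": ["W2", "W1", "W3"], "M2": ["W1", "W3", "W2"], "M3": ["W1", "W2", "W3"]}
women_true  = {"W1": ["M1", "M3", "M2"], "W2": ["M3", "M1", "M2"], "W3": ["M1", "M3", "M2"]}
women_lie   = dict(women_true, W2=["M3"])   # W2 reports that only M3 is acceptable

def men_proposing_da(men, women):
    """Men-proposing deferred acceptance with reported acceptability.
    A man not on a woman's list is rejected outright. Returns {man: woman}."""
    engaged = {}                     # woman -> man
    nxt = {m: 0 for m in men}        # index of each man's next proposal
    free = list(men)
    while free:
        m = free.pop(0)
        if nxt[m] >= len(men[m]):
            continue                 # exhausted his list; stays single
        w = men[m][nxt[m]]
        nxt[m] += 1
        if m not in women[w]:
            free.append(m)           # unacceptable to w: rejected immediately
        elif w not in engaged:
            engaged[w] = m
        elif women[w].index(m) < women[w].index(engaged[w]):
            free.append(engaged[w])  # w trades up; her old fiance is freed
            engaged[w] = m
        else:
            free.append(m)
    return {m: w for w, m in engaged.items()}

print(sorted(men_proposing_da(men_prefs, women_true).items()))
# truthful:    [('M1', 'W2'), ('M2', 'W3'), ('M3', 'W1')] -- W2 gets M1
print(sorted(men_proposing_da(men_prefs, women_lie).items()))
# manipulated: [('M1', 'W1'), ('M2', 'W3'), ('M3', 'W2')] -- W2 gets M3, her favorite
```

The manipulated run lands on the other stable matching, exactly as described in the text.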
This important realization that all mechanisms that lead to stable matchings are manipulable then led researchers to examine a variety of mechanisms more closely. This, for example,
helped prompt experimental analyses of various mechanisms, such as those in Harrison and
McCabe (1996), Kagel and Roth (2000), Chen and Sonmez (2006), Pais and Pinter (2008),
and Featherstone and Niederle (2009), as well as a number of further theoretical analyses of
various mechanisms, such as those by Niederle and Roth (2003b), Ergin and Sonmez (2006),
Ehlers (2008), Kojima and Pathak (2009), among others.[10]
In fact, as described in Roth and Peranson (1999) and Roth (2008), there was eventually a
redesign of the NRMP by Roth and Peranson that involved using a variation on the algorithm
with students proposing (as opposed to hospitals), thus making it a dominant strategy for
students to respond truthfully and resulting in their dominant match, with some reliance
on large numbers of hospitals leading to relatively small gains from misrepresentation of
preferences.[11],[12] The fact that with large numbers of agents, the gains from manipulating the
deferred acceptance algorithm appear minimal was pointed out by Roth and Peranson (1999)
who found that, in large markets with randomly generated preferences, very few agents had
attractive manipulations. This provides insight into why the deferred acceptance algorithm
[10] More generally, there is also a question of what information agents have when matching in either centralized or decentralized settings, and there is still relatively little work on that issue, with a few notable exceptions, such as Niederle and Yariv (2010).

[11] The redesign of the algorithm also dealt with another growing issue: many students in the matching process also have a partner or spouse in the matching. This leads their preferences to depend on two matchings, not just one, as they may wish to be in the same geographic area. This leads to substantial complications that the new algorithm cleverly addressed. Although the dominant strategy condition could no longer be satisfied, the set of manipulations was still minimal.

[12] They provided not only an algorithm, but also the software to implement it.

worked well for so long. This sort of competitive feature of large matching markets was
subsequently elaborated upon by Immorlica and Mahdian (2005) and Kojima and Pathak
(2009).
It is also worth mentioning another feature of the above analysis. It is assumed in the
above example, and some of the early modeling, that each agent has strict preference rankings. In some applications, it may be that agents are indifferent between some potential
matches. For example, if hospitals are ranking doctors based on grades or some test outcomes, they may be indifferent between doctors who have similar grade point averages, or
scores on the exam. It would seem easy to simply break ties by some randomization. However, it quickly becomes clear that how ties are broken can affect the emerging outcome under
a variety of mechanisms, and one has to exercise care in how ties are handled with respect
to a number of issues regarding incentives and stability. This issue has been investigated in
more detail in the context of one-sided matching, such as the housing market, and I discuss
that more below.
Finally, the above analysis abstracts away from any side payments or wage considerations. Introducing the possibility of transfers into the setting can substantially influence the
analysis, and is relevant in many applications. The first works to introduce side payments
in a nontrivial manner were Shapley and Shubik (1972), Crawford and Knoer (1981), and
Kelso and Crawford (1982). Indeed, in some such settings one can then draw upon results
from general equilibrium and auction theory (e.g., see Demange, Gale and Sotomayor (1986)
and Hatfield and Milgrom (2005)). In fact, in many applications there are some wages (e.g.,
hospitals pay students), and questions as to how responsive those wages are to the matching
and whether they are influenced by the process (e.g., see Niederle and Roth (2003a) and
Bulow and Levin (2006)).

2.2 The Allocation of Indivisible Objects and the Housing Market

In the bipartite matching setting discussed above, both sides of the matching have preferences over each other; for example both the residents and hospitals have rankings over their
potential matches. There are many other matching settings where agents are matched to
objects and the objects do not have preferences. For example, assigning patients to kidneys,
students to public schools, or people to offices, are all settings where agents on one side of

the market (patients, students, people) have preferences over the matching, but the objects
on the other side (kidneys, public schools, offices) do not. Although there are some insights
from the two-sided matching setting that translate to this setting, there are new and different
issues that arise both in theory and practice, which I now discuss.
2.2.1 Foundational Theory and Some Applications

In applications of assigning individuals to objects, there are two basic classes of problems
that have been analyzed. In one, the individuals begin with some rights or endowments
of the objects and then can possibly exchange them. In another, individuals might have
some sorts of priorities in making choices or in selecting certain objects, but there are no
pre-assigned endowments.
I begin with the case of pre-assigned endowments of objects, as the theory did, with
what is known as the housing market. In that setting, first discussed by Shapley and
Scarf (1974), each agent begins by owning a house and has preferences over houses. The
idea is to re-allocate the houses to the agents. Agents have strict rankings over the houses,
including their own houses. As with two-sided matching, the idea is to find a matching that
is stable. Here, stability is based on a standard core definition: a matching is core stable if
there does not exist any group of agents who could reallocate their own houses and all be
better off than in the suggested matching. It turns out that there is a unique core allocation
in this environment. In fact, this setting was originally analyzed by Shapley and Scarf in looking for an application that would be covered by a core existence theorem of Scarf's that applies to settings without transfers.13 David Gale suggested an algorithm,14 now sometimes referred to as Gale's top trading algorithm, for finding the unique core allocation, and it is shown to do so in Shapley and Scarf's (1974) paper.
The top trading algorithm is quite intuitive and simple. Begin by having each agent
point to the agent who owns his or her most preferred house, which could be him or herself.
If there is a cycle (including the possibility that somebody points to him or herself), assign
all the agents in the cycle the house of the agent whom they have pointed to. Now, we are
left with a subset of agents and houses. Repeat the process on that subset, and continue
until no agents and houses remain. It is relatively simple to see that this algorithm makes assignments at each stage and leads to a core stable matching. At the first step, if any agents point to themselves, then they must be assigned their own houses in any core allocation, as they could block otherwise, and so they must all be removed. Consider the remaining agents. They must all be pointing at some other agent/house. There must be at least one cycle among these agents. To see this, simply follow the pointings starting from any agent. Given that it is a finite system, following the pointings must eventually result in a cycle. In the first step, if some set of agents form a cycle, including singletons, then they must each obtain their most preferred houses in a core-stable matching, as otherwise they can jointly deviate to obtain their most preferred houses (and note that no subset of a simple cycle can each obtain their most preferred houses without including the full cycle). Then iterate on this argument.

13 See the discussion in Scarf (2009).
14 Again, see the discussion in Scarf (2009).
Table 3: A Housing Example: Preferences of Agents over Houses

A1 A2 A3 A4
H4 H3 H4 H1
H2 H1 H2 H3
H3 H2 H1 H4
H1 H4 H3 H2

Let us illustrate the top trading algorithm with an example of a four-person housing market, with preferences as pictured in Table 3. So, agent 1 (A1) finds her own house (H1) least-preferred, and finds agent 4's house (H4) most-preferred. In this case, the top trading algorithm has A1 point to A4 and A4 point to A1, while A2 points to A3, and A3 points to A4. We find just one cycle, between A1 and A4, and so we swap A1's and A4's houses, and then proceed to the next step of the algorithm. At the next step, only two agents remain, and A2 and A3 each point to each other's houses. Thus we swap those, and we end up with a matching of { (A1,H4), (A2,H3), (A3,H2), (A4,H1) }. As with the marriage market, we can see why this is an insightful algorithm by seeing what might happen instead if things were decentralized and agents simply took improving swaps whenever they found them. For instance, if agents 1 and 2 bump into each other first, and swap houses (as they each prefer the other's house to their own), then we get to a matching of { (A1,H2), (A2,H1), (A3,H3), (A4,H4) }. From there, perhaps A2 and A3 notice that they each prefer the house that the other now has to the one that they have, and so they swap, to end up with { (A1,H2), (A2,H3), (A3,H1), (A4,H4) }. Next, A3 and A4 bump into each other and swap houses, to get to { (A1,H2), (A2,H3), (A3,H4), (A4,H1) }. This turns out to be a match that is stable in the sense that if we get to this point, there are no improving swaps for any subset of agents (in fact three of the agents have their favorite house). However, agent 1 would have been better off not making the initial trade, but instead waiting to trade with agent 4. Thus, agent 1 could block this process. The top trading algorithm finds the only matching that is stable in the sense of avoiding such blockings. Indeed, the algorithm has to be careful about which assignments it makes at each stage in order to find a stable matching that is not blocked by any group with their original endowments.
The core, and the associated top-trading algorithm, have a set of nice properties. First, there is a unique matching that is core-stable, as can be seen in the above description as well as the example, and as was shown by Roth and Postlewaite (1977). Second, it is strategy-proof, as noted by Roth (1982a): it is in each agent's best interest to point at their favorite house at each stage of the algorithm (or, more precisely, it is a dominant strategy for agents to truthfully reveal their rankings to a central planner who executes the algorithm). In fact, as shown by Ma (1994), the only strategy-proof, individually rational, and Pareto efficient manner of matching houses to agents is to choose the unique core allocation for each preference profile.
While this seems to be a quite stylized and simple setting, it has some direct applications. Indeed, as shown by Roth, Sönmez, and Ünver (2004), this applies to a setting of kidney exchange. In that important application, there are patients who need kidney transplants. Kidneys come with all sorts of specific characteristics that lead to much higher probabilities of survival if the characteristics of the donor and patient are congruent in terms of blood types and other details. In particular, a patient who needs a transplant might have someone who has agreed to donate a kidney, but with whom that patient is not such a great match. If there are many pairs of patients together with donors, then by operating a top trading cycle, one can efficiently exchange kidneys in such a way that each patient ends up with the best possible match (that is stable). More generally, agents occupy places in a waiting list for available kidneys (usually from cadavers), and those may also be included, which complicates the problem, as then not all agents are endowed with a kidney. There are also issues that have to do with the enforceability of exchange and the fact that in many cases the operations have to take place contemporaneously. These and other details require nontrivial extensions of Gale's top trading cycle in order to work in practice, and have motivated additional theoretical investigations (e.g., Roth, Sönmez, and Ünver (2004, 2005, 2007)). It is also important to note that the theory is influencing practice, as such kidney exchange programs are now being implemented, partly due to the efforts of Roth, Sönmez, and Ünver, among others, as described in Sönmez and Ünver (2010).
Beyond the two-sided matching problems and the housing market, there are all sorts of
hybrid theoretical problems that are derived from important applications. A very active
area of both application and theory is that of matching students to public schools. All over
the world, there are issues of which students are allowed to attend which public schools,
and things that are taken into account include where students live as well as whether they
have siblings at a given school. Each school may have some capacity constraint, and so
it might not be possible for each student to attend the school that he or she most prefers
(although law requires there be enough supply for each student to attend some school). A
system is needed to decide which students attend which school. Here again, the students
have preferences over the objects (public schools), and the schools are public and can at least
be thought of as not having preferences over the students in some applications. This differs
from the housing market in that students do not begin with any endowment of particular
schools. In this environment, one can view each possible seat at a school as a separate object
to be allocated.
If there were no priorities of which students should be favored at which schools, then
there exists a very natural mechanism. Simply run a lottery, so that a random ordering over
students is chosen, and then each student in turn picks which of the remaining openings he
or she wishes to occupy. This is known as a random serial dictatorship, and it clearly finds a
stable match,15 and the resulting match is also Pareto efficient. Moreover, it is a dominant
strategy for each student to pick his or her most preferred opening when given the choice,
and so it is a dominant strategy for them to submit their true rankings to whomever might
be running the algorithm.16
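A random serial dictatorship can be sketched in a few lines of Python. This is my own illustration; the school names and capacities are hypothetical, and it assumes total capacity is at least the number of students:

```python
import random

def random_serial_dictatorship(prefs, capacity, rng=random):
    """Draw a uniformly random order over students; each student in turn
    takes a seat at his or her most preferred school with an opening."""
    order = list(prefs)
    rng.shuffle(order)
    seats = dict(capacity)          # remaining openings at each school
    assignment = {}
    for student in order:
        school = next(s for s in prefs[student] if seats[s] > 0)
        assignment[student] = school
        seats[school] -= 1
    return assignment

prefs = {"s1": ["X", "Y"], "s2": ["X", "Y"], "s3": ["Y", "X"]}
capacity = {"X": 1, "Y": 2}
match = random_serial_dictatorship(prefs, capacity)
# Whichever of s1 and s2 is drawn first takes the single seat at X;
# s3 always receives Y, his or her top choice.
```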
15 Stability here refers to the fact that no students would want to switch their assignments after the fact, as long as students have strict rankings over schools (and there are no school preferences, as there are no priorities).
16 Even with these nice properties, such a mechanism can fail to be efficient from an ex-ante perspective, as pointed out by Bogomolnaia and Moulin (2001), who introduced another set of algorithms, including a probabilistic serial mechanism, in which agents can be thought of as getting various amounts of probability which they then spend in a clever eating procedure on various schools. Such mechanisms fail to have the dominant strategy properties that the random serial dictatorships have, but end up satisfying stronger ordinal efficiency properties and still satisfying some nice incentive properties.

Many applications, however, come with some restrictions or other sorts of priorities in terms of which students should be assigned to which schools or should be allowed to have a seat in a given school before some other student. For example, students may have taken exams and have some priorities based on their scores (e.g., see Balinski and Sönmez (1999)), or they may have siblings at a school or have other geographic priorities (e.g., see Abdulkadiroğlu and Sönmez (2003)). A key insight from Balinski and Sönmez (1999) and Abdulkadiroğlu and Sönmez (2003) is that one can leverage some of the earlier work on two-sided matching to address these problems. The idea is that these priorities can be represented as if the schools had some preferences over the students (although those preferences now have to be interpreted differently from a welfare perspective). The priorities are often rough, in that many students have similar priorities. Consequently, there is much indifference in schools' artificial preferences that has to be treated somewhat carefully, for when two students have an equal claim to a given scarce seat not everyone can be satisfied.

These issues have come to the forefront in practice because of difficulties experienced by various school districts using certain methods. For instance, some become evident in what has become known as the Boston mechanism. That mechanism begins with a strict priority listing of students determined for each school, taking into account which students live close to a school or already have a sibling at a school, etc., with ties broken by a randomization. Then students submit their rankings over the schools to the system. In the first step of the matching, students are assigned to their top-choice school in order of priority until the seats at a school are exhausted. Some students may not get their top choice: in particular, if more students list a given school as their top choice than there are openings at that school, then students are rationed by the priority and some students are left unassigned. Also, some schools may still have openings if fewer students listed them as a top choice than there are openings at the school. Next, the procedure is repeated with the remaining seats and unassigned students, with students now naming their second choice, and so forth.

The Boston mechanism is a well-defined system, but it has some undesirable incentive properties that have led to many complaints. By truthfully listing a top choice, a student with a low priority at that top choice may end up not getting that choice, and also having other schools fill up in the first step, and then finally ending up with a very bad match.17
As such, students have some incentives to game the system by listing schools at which they have high priorities as their top choices, even if those schools are less than good fits for the students, for fear of ending up with something even worse.
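To see this incentive problem concretely, here is a minimal sketch in Python; the three students, schools, priorities, and function names are my own hypothetical example, not from the text. Student s1 has low priority at school X and high priority at school Y; reporting truthfully leaves s1 with the worst school, while misreporting secures the second choice:

```python
def boston(reported, priority, capacity):
    """Boston mechanism: in round k each unassigned student applies to
    the k-th school on his or her reported list; schools admit in
    priority order, and all assignments are final."""
    seats = dict(capacity)
    assignment = {}
    for k in range(max(len(r) for r in reported.values())):
        applicants = {}
        for s, ranking in reported.items():
            if s not in assignment and k < len(ranking):
                applicants.setdefault(ranking[k], []).append(s)
        for school, apps in applicants.items():
            apps.sort(key=priority[school].index)
            admitted = apps[:seats[school]]
            for s in admitted:
                assignment[s] = school      # final, never revisited
            seats[school] -= len(admitted)
    return assignment

true_prefs = {"s1": ["X", "Y", "Z"], "s2": ["X", "Y", "Z"], "s3": ["Y", "X", "Z"]}
priority = {"X": ["s2", "s1", "s3"], "Y": ["s1", "s3", "s2"], "Z": ["s1", "s2", "s3"]}
capacity = {"X": 1, "Y": 1, "Z": 1}

truthful = boston(true_prefs, priority, capacity)
strategic = boston(dict(true_prefs, s1=["Y", "X", "Z"]), priority, capacity)
print(truthful["s1"], strategic["s1"])   # Z Y: misreporting helps s1
```

Under truthful reporting, s1 loses X to s2 in the first round, by which time Y is already full; listing Y first instead secures it.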
Difficulties with such mechanisms were causing increasing problems in major public school systems like those of New York and Boston, which were matching large numbers of students to schools each year. The Gale and Shapley (1962) student-optimal version of the deferred acceptance algorithm offered a nice alternative, as it has better incentive properties, as well as other stability and envy properties (as it respects students' priorities). Atila Abdulkadiroğlu, Parag Pathak, and Al Roth helped in designing a new matching system based on a variation of the deferred acceptance algorithm (see Abdulkadiroğlu, Pathak, and Roth (2005)). In parallel, Abdulkadiroğlu, Pathak, and Roth had also joined forces with Tayfun Sönmez, who was helping advise the Boston school system in a similar redesign (see Abdulkadiroğlu, Pathak, Roth, and Sönmez (2005)),18 and many other school systems have also been following suit.
Although the deferred acceptance algorithm has nice incentive properties, it fails to be Pareto efficient: there can be situations where all students would be better off with an alternative matching. In the two-sided matching problem, Pareto efficiency came as a by-product of stability, which actually implies a form of core stability. However, in this setting, if one no longer views the schools as part of the welfare equation and instead focuses only on the students, there are situations where all students could be made better off. This observation opens the door to alternatives to the deferred acceptance algorithm: by giving up some incentive properties and an envy-free property that the algorithm also satisfies, one can end up with alternative methods that in some cases offer welfare improvements. Comparisons across mechanisms are then more nuanced, which results in some substantial design challenges that have driven further investigations and discoveries. Indeed, there are many issues that these systems raise about randomization, tie breaking, incentives to reveal preferences truthfully, envy, and welfare, that have resulted in interesting theoretical work, experiments, empirical studies, and new designs of systems for matching students to schools (see the recent survey by Sönmez and Ünver (2011)).

17 A critical distinction between this and the deferred acceptance algorithm is that first engagements in the deferred acceptance algorithm are not binding. In the Boston mechanism, the first assignments are final. Thus, students in the Boston mechanism have to worry about being rationed at the first stage, since while they were failing to get their top choice it could be that their second choice (where they might, for instance, have a high priority) was also filled. In the deferred acceptance algorithm, students can still propose to their second choice and have it be accepted, even if it was tentatively engaged. Thus, under the Boston mechanism students have to worry about what other students are doing.
18 These redesigns each began in 2003, with the New York system approaching Al Roth for aid and the Boston redesign prompted by an article in the Boston Globe about the difficulties that Abdulkadiroğlu and Sönmez (2003) had pointed out with the Boston mechanism. The New York changes were quickly approved (partly due to its bankrupt situation at the time), while the Boston system took several years to clear community discussions.

3 Auctions

The auctions literature differs from the matching literature discussed above in that objects
are exchanged for some transferable good that in practice is generally money.19 This has led
to a different development of the literature, but again one closely driven by application.
There are three major strands of the auctions literature. One concerns the most abstract
question of understanding the limitations of how to exchange goods among, or allocate
specific objects to, a set of individuals who have private information about the values of
those objects. The second is a more positive strand of developing theoretical predictions
about the workings of specific and observed auction systems. The third is the actual design
of auctions in practice for a variety of applications ranging from the sale of treasury bonds
to the allocation of the rights to broadcast in certain ranges of public radio spectra. From
the outset, the theory of auctions, and the more general mechanism design questions that
emerged alongside it, have been closely motivated by specific practical questions and the
analysis of existing institutions.
19 There are connections and overlaps between the literatures, as transfers can be used in matching markets, and auctions can be viewed as assignment or matching problems. For example, Hatfield and Milgrom (2005) offer a model that nests parts of each of these literatures and helps view connections between them. Nonetheless, for the most part the literatures have progressed independently with separate applications in mind.

3.1 Early Auction Theory

3.1.1 Specific Auctions and Revenue Equivalence

The seminal paper in the theoretical literature is Vickrey's (1961) article, in which he provides some of the first formal analyses of a series of observed auction formats. In particular, there are four main auction formats for a single indivisible object that Vickrey discussed:
- a free-form open outcry auction (sometimes referred to as an English auction), where bids ascend until no bidder is willing to offer a higher bid, and the winner is the last to have bid, who then pays that price,
- a second-price auction, where bidders submit bids (say in sealed envelopes) and then the highest bidder wins but pays the second highest bid,
- a Dutch auction, where the price begins at a high level and is decreased over time until some bidder accepts the price and wins and pays that price,
- a first-price auction, where bidders submit bids in sealed envelopes and the highest bidder wins and pays his or her bid.
Vickrey's (1961) analysis of these auctions was in the context of a model with symmetric independent private values. That is, each bidder in an auction has some value for the object that is known to that bidder, with the notation v_i representing the value that bidder i is willing to pay (say in money terms) for the object in question. Each bidder knows his or her own valuation, but is potentially uncertain about the valuations of the other bidders. Bidder i views the possible valuations of other bidders as random variables, and each bidder's valuation is independently and identically distributed according to some commonly known distribution.
Vickrey's analysis included some fairly straightforward observations, and then some deeper ones. The relatively straightforward observations are that the ascending bid (English) auction and the second-price auction are equivalent in the sense that the equilibrium (and dominant) strategies are to bid one's true valuation in the second-price auction and to stay in until the price reaches one's valuation in the English auction.20 Similarly, the Dutch auction and first-price auction have a sort of strategic equivalence, and the equilibrium strategies are to bid some fraction of one's valuation, with the same fraction in both of these latter auctions. Thus, there are senses in which the English auction and second-price auctions are equivalent, and the Dutch and first-price auctions are equivalent.

20 Even though this is relatively straightforward in terms of a formal economic analysis, subjects in laboratory experiments more easily find their dominant strategies in the context of an English auction than in the context of a second-price auction, as found by Kagel, Harstad, and Levin (1987).
The other, much more subtle, observations that emerged from Vickrey's analysis of these auctions were as follows. In each of these auctions the object would always end up in the hands of a bidder who valued it most (so the outcome was efficient), and the expected payment made by the winning bidder was the same across the four auction formats. This latter conclusion is termed revenue equivalence and was shown in the context of a specific (uniform) distribution of values in Vickrey (1961), and then for a more general class of distributions in Vickrey (1962).
The case solved in Vickrey (1961) is for a uniform distribution of types, say on [0,1]. To see the intuition behind the result, let us consider the equilibria for these different auction formats. First, consider an easy version of the English auction to analyze: the price steadily rises over time, bidders choose when to drop out of the auction, and the price stops when there is just one bidder left. It is clearly a dominant strategy to stay in the auction until the price hits one's value, but not once the price exceeds one's value. Thus, the bidder with the highest valuation wins the auction, and the price ends up being the second highest value. Similarly, one can check that in a second-price auction, it is a dominant strategy for each bidder to submit his or her true valuation, as then a bidder wins whenever the highest bid of the others (and hence the price he or she pays) is below his or her value, and does not win (and would prefer not to) whenever there is some other bid above his or her value. The first-price auction (and Dutch auction) are slightly more involved to analyze, as there are no longer dominant strategies, and the optimal bidding strategy of one bidder depends on what the other bidders are doing. Clearly a bidder will wish to bid below his or her value, but how much below is the question. Lowering the bid lowers the price one pays if one wins, but also lowers the probability of winning. The precise optimal bid then balances these considerations. In the context of a uniform distribution, the equilibrium ends up being that bidders bid v_i(n-1)/n, where n is the total number of bidders. So, with just two bidders, each bids half of his or her value. As the auction becomes more competitive, the bids become greater fractions of the bidders' valuations. When one looks at the expected maximum value of v_i across bidders and then multiplies it by (n-1)/n, that turns out to be the expected value of the second highest valuation for the second-price and English auctions. This equivalence of the expected winning price across all of the auctions was somewhat surprising, and is part of a more general phenomenon explained below.
3.1.2 More General Auction Formats and Mechanisms

Although it was not fully recognized at the time, Vickrey's work was also an important first step in the analysis of general mechanisms for allocating goods and services with transfers, with part of his analysis forming a basis for what are now referred to as VCG mechanisms (Vickrey, Clarke, Groves mechanisms).
Vickrey's recognition that the revenues of the four prominent auction formats listed above are equivalent in terms of the expected payments received by the seller turns out to be true more broadly. As established independently by Harris and Raviv (1981), Myerson (1981), and Riley and Samuelson (1981), it extends to any of a set of mechanisms such that: a bidder with the lowest possible valuation makes no expected payment; bidders are risk neutral; bidders' valuations are private and independently and identically distributed according to an atomless and continuous distribution; and the bidder with the highest valuation is awarded the object. Myerson's work made explicit use of the revelation principle (from Myerson (1979)) and provided a useful framework for the mechanism design literature, while some of the same ideas were more implicit in the other analyses. A main insight in these works was that one could view a bidder's problem as buying probabilities of getting the object together with corresponding expected payments. Viewed in this light, one can then ask which mechanism maximizes the revenue of the seller (the optimal auction problem), and also see that incentive conditions impose requirements on the expected payments. One can also see that if two mechanisms lead to the same probabilities of getting objects as a function of types, and have some boundary conditions on payments, then they must lead to the same expected payments as a function of type. This turns out to provide the general insight behind the earlier revenue equivalence results.
Without providing too much detail, let me sketch the basic ideas in the proof of revenue equivalence, since it illustrates some of the ideas behind the techniques. I loosely follow a version of the argument from Milgrom and Weber (1982). Consider a bidder with valuation v, and suppose that the bidder through different behaviors can obtain various probabilities p of winning the object. Let e(p) be the (lowest) expected payment that corresponds to winning with probability p, and suppose for our convenience (the proof works without this) that e is differentiable. The bidder's expected payoff is vp - e(p), and so a necessary condition for the bidder to be maximizing her payoff in terms of choosing a probability of winning the object is that v = e'(p). So, let us denote the optimal choice of p as a function of v by p(v), which then satisfies the equation v = e'(p(v)). Now we write the expected payment of some particular type v̂ as

e(p(v̂)) = e(p(0)) + ∫_0^v̂ [d e(p(v))/dv] dv.

Note that d e(p(v))/dv is e'(p(v)) p'(v), which from our earlier necessary condition reduces to v p'(v). So, we can write

e(p(v̂)) = e(p(0)) + ∫_0^v̂ v p'(v) dv.

The key observation from this is that the expected payment of a bidder with valuation v̂ now depends only on the expected payment at v = 0 and the function p', which depends only on the probability of winning as a function of type. So any two mechanisms for which these are identical will lead to the same expected payments for all types. A special case is where the bidder with the highest value wins, which then precisely ties down p'(v).21
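For the uniform case, this envelope formula for expected payments can be checked numerically. With n bidders and truthful behavior, the probability that a type-v bidder has the highest of n i.i.d. uniform [0,1] values is p(v) = v^(n-1), and the expected payment from the formula, divided by the probability of winning, recovers the first-price equilibrium bid v(n-1)/n. The short Python check below is my own illustration:

```python
def expected_payment(v_hat, n, steps=100_000):
    """Evaluate e(p(v_hat)) = integral from 0 to v_hat of v p'(v) dv
    by the midpoint rule, with p(v) = v**(n - 1) for uniform values."""
    dv = v_hat / steps
    total = 0.0
    for i in range(steps):
        v = (i + 0.5) * dv
        total += v * (n - 1) * v ** (n - 2) * dv   # v * p'(v) dv
    return total

n, v_hat = 4, 0.8
e = expected_payment(v_hat, n)
bid = e / v_hat ** (n - 1)   # expected payment conditional on winning
# e is close to (n-1)/n * v_hat**n = 0.3072, and bid to 0.8 * 3/4 = 0.6.
```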
With this sort of formulation in hand, one can then view auctions together with their equilibria in the abstract, as collections of (p_i(v_i), e_i(p_i(v_i))) across agents that satisfy incentive compatibility conditions (so that the bidder of type v_i prefers the probability of winning and expected payment associated with her true v_i to that associated with pretending to have some other value), and then ask which mechanisms maximize the seller's overall revenue. The further results that emerge from Harris and Raviv (1981), Myerson (1981), and Riley and Samuelson (1981) show that in some important cases, the auction formats that maximize the seller's revenue correspond to the standard auction formats given above together with a carefully selected reserve price (minimum bid) or participation fees.
3.1.3 Moving Beyond Private Values

Vickrey's formulation, and much of the literature that followed, worked in the private values framework, where each bidder knew exactly his or her value for the object.22 While private values capture the essence of some settings, many if not most settings involve more complicated interdependencies in valuations.23 To take a classic example, if bidders are oil companies bidding on the right to extract oil from a given tract of land that is owned by a government, how much oil the land holds is a common piece of uncertainty to each of the bidders, and affects their valuations. They may each hire experts to provide some estimates based on seismic tests, etc., which then leads to some imperfect estimate on the part of each bidder as to the value of the oil that is held by the land.

21 This sketch has worked as if things are nicely differentiable. Nonetheless, it is not too difficult to show that the functions in question satisfy some monotonicity conditions, which together with the continuous distribution on bidders' types, allow one to extend the argument without full differentiability.
22 For a survey of some of that literature, see Englebrecht-Wiggans (1980).
An important first work in this direction is by Wilson (1967, 1969), who analyzed pure common values auctions: situations where the ex post value to the bidders is identical, but uncertain at the time of bidding. In such settings a winner's curse applies in the context of a first-price auction: if bidders simply bid their expected values for the object, then the winner will be the one with the highest estimate. If estimates have independent errors in them, then the winner is likely to have over-estimated the value of the object.24 Thus, the equilibrium involves bidding below one's estimate in a manner that takes into account what one learns about the other bidders' information by conditioning on winning. Formulating the game carefully, and deriving an equilibrium in such settings, was a significant advance in the literature.25
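A small simulation makes the winner's curse vivid (my own illustration; the distributions are arbitrary choices). Each bidder's signal is an unbiased estimate of a common value, but the highest of several signals systematically overestimates it, so naive bidders who simply bid their signals lose money on average:

```python
import random

def naive_winner_profit(n_bidders, trials, rng):
    """Common value V ~ U[0,1]; bidder i observes V plus uniform noise.
    If all bidders naively bid their signals in a first-price auction,
    return the winner's average profit (value minus winning bid)."""
    profit = 0.0
    for _ in range(trials):
        v = rng.random()
        winning_bid = max(v + rng.uniform(-0.2, 0.2) for _ in range(n_bidders))
        profit += v - winning_bid
    return profit / trials

avg = naive_winner_profit(8, 100_000, random.Random(1))
# avg is clearly negative: conditioning on winning, the winner's
# signal was too optimistic on average.
```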
Wilson (1969) studied the case of two bidders, and Ortega-Reichert (1968) then provided generalizations of Wilson's results in an important (unpublished) dissertation that also made strides in other directions. The common values model was further studied in various forms by Rothkopf (1969), Wilson (1977), Milgrom (1979, 1981), Englebrecht-Wiggans, Milgrom, and Weber (1983), and Maskin and Riley (2000b), among others.
One of the most important contributions in this line is Milgrom and Weber's (1982) paper, which substantially expanded the analysis of auctions beyond the independent private values and pure common values settings. It illustrated the power of a natural set of assumptions about the private information of the bidders. It also synthesized and generalized many earlier results, and provided some basic insights that are important benchmarks for much of the more recent auction analysis.

23 Finding purely private value settings is more difficult than one might imagine. Even settings like the sale of paintings (see the discussion in Milgrom and Weber (1982)), where individual bidders might have private tastes for a painting, will have some possibility of resale in the future, which introduces some common aspect to the valuations of current bidders.
24 Empirical evidence of a winner's curse effect in the context of bidding for the rights to offshore oil was found by Capen, Clapp, and Campbell (1971). A clever investigation of such effects and bidding strategies appears in Hendricks, Porter, and Boudreau (1987).
25 Wilson (1969) is one of the first significant uses of Harsanyi's (1967, 68) concept of Bayesian equilibrium.
Milgrom and Weber's (1982) formulation maintained the risk neutrality of the bidders,
but substantially generalized the informational setting. They envisioned a setting where
there are some underlying unobserved state variables (e.g., the amount of oil in the ground),
as well as some privately observed variables (each bidder's information about the oil, and perhaps
other things such as a bidder's estimate of his or her cost of extracting the oil, and so forth).
The key assumptions are that this full vector of states and private information enters each
bidder's utility function in a non-decreasing way (i.e., variables are coded so that increasing
any entry weakly increases a given bidder's payoff), that there is a symmetry in the way
in which bidders' preferences depend on the information, and that the random state and
private information variables are all affiliated. Affiliation is a property emerging from the
statistics literature26 that requires that higher values of any particular variable correspond
to higher conditional distributions of values of the other variables. So, for instance, a higher
value of one bidder's signal about the value of the oil in a tract corresponds
with a higher actual value and higher signals of others, in a precise probabilistic sense.27
Pure common values can be seen as a limit of private values where the correlation in values
hits one, but the affiliation setting of Milgrom and Weber is substantially more general
than simply varying the correlation in values, as it allows for both common and private
components and complex interrelationships, as well as signals that individuals observe that
are imperfect predictors of underlying state variables.
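To make affiliation concrete, here is a minimal sketch (not from the original text) that checks the discrete analogue of the affiliation inequality, f(x ∨ x′)f(x ∧ x′) ≥ f(x)f(x′), for a joint distribution over two binary signals; the function name and the example distributions are illustrative assumptions.

```python
import itertools
import numpy as np

def is_affiliated(f, tol=1e-12):
    """Check the discrete affiliation (MTP2) inequality
    f(x v x')*f(x ^ x') >= f(x)*f(x') over all index pairs of a joint pmf."""
    indices = list(itertools.product(*(range(n) for n in f.shape)))
    for x, xp in itertools.combinations(indices, 2):
        join = tuple(max(a, b) for a, b in zip(x, xp))  # component-wise maximum
        meet = tuple(min(a, b) for a, b in zip(x, xp))  # component-wise minimum
        if f[join] * f[meet] < f[x] * f[xp] - tol:
            return False
    return True

# Two binary signals that tend to move together: affiliated.
positive = np.array([[0.4, 0.1],
                     [0.1, 0.4]])
# Two signals that tend to move oppositely: affiliation fails.
negative = np.array([[0.1, 0.4],
                     [0.4, 0.1]])
print(is_affiliated(positive), is_affiliated(negative))  # True False
```

The check makes plain why affiliation captures "higher signals go together": the positively dependent distribution satisfies the inequality at every pair of signal profiles, while the negatively dependent one violates it.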
Milgrom and Weber (1982) then derived equilibrium conditions for bidding behavior
under various auction formats, showing that revenue equivalence no longer holds in such an
environment. In fact, the English and second-price auctions are no longer equivalent (as a
bidder learns more about other bidders' information as they drop out in an English auction
than one can condition upon in a sealed-bid second-price auction). The Dutch and first-price
auctions are equivalent, since the winning bidder does not know anything other than the fact
that the other bidders' bids are lower. More precisely, in this setting the English auction
leads to (weakly) more revenue for a seller than the second-price auction, which in turn leads
to weakly more revenue for a seller than a first-price auction, with strict differences in some
cases with dependent information. Part of the intuition is that there is less of a winner's
curse effect in an English auction, where a bidder can deduce more about the value of the
object by observing other bidders' behaviors; similarly, the conditioning involved in paying the
second-highest bid provides slightly more information than simply paying one's own bid.

26 See the discussion of multivariate total positivity (also known in various forms as MTP2) by Karlin and Rinott (1980), and its roots in what is known as the FKG inequality on lattices by Fortuin, Kasteleyn, and Ginibre (1971).
27 The precise definition, for the case where the joint distribution of the variables x is described by a density function f (as is the case in Milgrom and Weber (1982)), is that f(x ∨ x′)f(x ∧ x′) ≥ f(x)f(x′) for all vectors x and x′, where x ∨ x′ and x ∧ x′ are the component-wise maximum of x and x′ and minimum of x and x′, respectively.
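The winner's curse logic behind this intuition can be illustrated with a small Monte Carlo sketch (an illustrative construction, not from Milgrom and Weber): with a normal common value and conditionally independent normal signals, the best estimate a bidder can form from her own signal alone, E[v | s] = s/2, is systematically too high conditional on that signal being the highest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bidders, n_auctions = 5, 100_000

# Common value v ~ N(0,1); each bidder sees s_i = v + e_i with e_i ~ N(0,1),
# so the estimate using only one's own signal is E[v | s_i] = s_i / 2.
v = rng.standard_normal(n_auctions)
signals = v[:, None] + rng.standard_normal((n_auctions, n_bidders))
highest_signal = signals.max(axis=1)

# Conditioning on having the highest signal biases the naive estimate upward:
curse = (v - highest_signal / 2).mean()
print(f"average error of the high bidder's naive estimate: {curse:.2f}")  # negative
```

A bidder who fails to shade her bid to account for the information in the event of winning would, on average, overpay; formats that reveal more of the other bidders' information (as the English auction does) reduce this residual uncertainty.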
Beyond the first use of affiliation as a basis for studying information in auctions, and the
derivation of equilibrium characterizations and relative revenue rankings in such settings,
Milgrom and Weber (1982) also derived what is known as the linkage principle. That principle refers to the following insight: the more closely a given bidder's information tracks other
bidders' signals and values of the underlying variables (the more it is linked to other pertinent information), the greater the difference there will be in that bidder's bid as a function of
his or her information. Effectively, the bidder's bidding function becomes a steeper function
of the bidder's information, which may roughly be thought of as bidding more aggressively as
there is less residual uncertainty about underlying values and potential bids of other agents,
leading to more competitive pressure and less concern about winner's curse issues. As bidders'
bidding functions become steeper, they lead to (weakly) more revenue. This principle can
then be used to derive some of the revenue rankings above, and also to deduce that a seller
following a general policy of revealing his or her information will enjoy higher profits than
one who generally hides pertinent information.
3.1.4 Asymmetries and Other Issues

As was recognized early on, and in fact discussed by Vickrey (1961), the elegant and powerful
results that can be obtained in settings with symmetric distributions of information among
bidders do not extend easily to cases with asymmetries. Not only does the problem become
substantially more complicated, but in fact many of the conclusions of the symmetric analysis
fail to hold. Even in private values settings, the highest-valued bidder might not win the
auction, and there might not even exist an equilibrium. Work on this, beyond Vickrey's
examples, includes Maskin and Riley (2000) and Marshall, Meurer, Richard, and Stromquist


(1994).28
In addition, the analysis referred to above generally focuses on the case of risk neutral
bidders, while in many contexts it is natural for bidders to be averse to risk. Risk aversion
also affects the results. Most fundamentally, revenue equivalence fails to hold.29 Various
rankings of auction formats can be established under risk aversion, as noted by Holt (1980),
Milgrom and Weber (1982), and Matthews (1987), among others.30
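For contrast, the risk-neutral benchmark in which revenue equivalence does hold can be checked with a quick simulation (a sketch under independent values uniform on [0,1], an assumption made here for illustration): with n bidders, the symmetric first-price equilibrium bid b(v) = (n-1)v/n and the dominant-strategy second-price bid both give the seller expected revenue (n-1)/(n+1).

```python
import numpy as np

rng = np.random.default_rng(1)
n, draws = 3, 200_000

# Independent private values, uniform on [0, 1], sorted within each auction.
values = np.sort(rng.random((draws, n)), axis=1)

# First-price: winner pays her own equilibrium bid, (n-1)/n times the top value.
first_price = ((n - 1) / n * values[:, -1]).mean()
# Second-price: truthful bidding is dominant; winner pays the second-highest value.
second_price = values[:, -2].mean()

print(f"first-price: {first_price:.3f}  second-price: {second_price:.3f}")
# both close to (n-1)/(n+1) = 0.5
```

Introducing risk-averse bidders (or dependent information, as above) breaks this coincidence: risk-averse bidders bid more aggressively in a first-price auction to reduce the risk of losing, which is why rankings across formats can then be established.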

3.2 Practical and Applied Auction Design

The early auctions literature had a healthy dose of the influence of practice on the theory.
Indeed, it is seen in Vickrey's original (1961) work in the formulation of the standard auction
formats, and the usage of various auction formats. More broadly, the many studies (theoretical, experimental, and empirical) on reserve prices and entry fees, participation, and
the winner's curse all exemplify the close relationship between theory and practice that the
auctions literature has always had. Again, as pointed out in the introduction, this comes
from the very clearly delineated rules that underlie auctions and the natural fit with a game-theoretic
analysis that they afford. Important examples of this interface include the debate
28 Existence of equilibria in such settings becomes substantially more difficult to establish. Generally, auctions are discontinuous games, since by adjusting one's strategy slightly, one can change from winning to losing. Thus, without having some explicit construction of equilibria, existence is a challenge to establish. This applies a fortiori in asymmetric settings, where explicit derivation of equilibria is rare and where in some cases equilibria even fail to exist (e.g., see Jackson (2009)). Various techniques have been successful in some specific environments, by taking advantage of monotonicity properties (e.g., Athey (2001), McAdams (2003), Reny and Zamir (2003), Reny (2010)) or other properties of best response correspondences (Reny (1999)), or by looking at limits of finite approximations (Jackson and Swinkels (2005)).
29 Even with risk neutrality and private values, revenue equivalence depends on specific assumptions about the numbers of objects and bidders and the informational structure, and it fails to hold if those are violated (see Jackson and Kremer (2006)).
30 There are many other important questions addressed in the literature that I do not discuss here, including: auctions with large numbers of bidders (e.g., Swinkels (2001), Bali and Jackson (2002)), multiple identical objects for sale (e.g., Pesendorfer and Swinkels (1995, 1997), Jackson and Kremer (2004, 2006)), uncertain numbers of bidders (e.g., Harstad, Kagel, and Levin (1990)), entry decisions and information acquisition (e.g., McAfee and McMillan (1987), Levin and Smith (1994), Bergemann and Valimaki (2002)), budget constraints (e.g., Che and Gale (1996)), double auctions (e.g., Satterthwaite and Williams (1989), Rustichini, Satterthwaite, and Williams (1994)), collusion in auctions (discussed below), other forms of interdependent valuations, and additional efficiency concerns.

about the auction method used to sell treasury securities (e.g., see Back and Zender (1993),
Simon (1994), Nyborg and Sundaresan (1996), and Ausubel and Cramton (1998)).
Still, perhaps the largest jolt that the auctions literature got from application came with
a series of auctions of radio broadcast spectra by various governments in the 1990s. These
auctions involved theorists both in the design of the auctions and the advising of bidders.
Moreover, they brought to the table a whole series of questions that researchers had been aware
of, but had not studied in much detail, at least not in concert. As a prime example, the
1994 FCC spectrum auctions in the US were for many different licenses to use a part of the
broadcast spectrum in various geographic locations.31 Bidders might have different values
for different geographic locations, and bidders generally had preferences over combinations
of licenses that exhibited strong forms of complementarities (e.g., a phone provider would
want to be able to cover large areas and so would desire to purchase combinations of licenses
in order to be able to offer its customers contiguous service). There were asymmetries
amongst bidders on many dimensions, including ability to pay for licenses, and there were
questions about potential collusion among bidders as well as basic issues about how many
bidders would actually participate. Beyond this, basic computational issues became more
prominent: with many licenses one cannot expect bidders to communicate bids on all subsets of potential objects. Should the auctions for various licenses be run simultaneously or
sequentially? How should the prices and various winners be tied together across licenses,
if at all? The combination of so many departures from the theory at one time meant that
existing theory provided some basic insights, but there were many more unanswered than
answered questions in terms of how to design such auctions and what to expect from them.
This energized the literature, which had calmed a bit since its first pinnacle in the early
1980s. Moreover, the auctions yielded much more revenue than the governments had initially
anticipated, and led to extensive press coverage and additional scrambles by various governments to auction off rights to provide a variety of services. In fact, much was learned through
the subsequent design of various auction formats and the results that emerged as they were
used. Moreover, the auctions have been developed in close consultation with economists,
throughout the design phase, experimental testing phase, and implementation phase. Several
main features have been emphasized in the subsequent literature. Some important issues
31 Various accounts of these spectrum auctions and others appear in the literature, including McAfee and McMillan (1996), Cramton (1997), Milgrom (2000, 2004), Wilson (2002), and Klemperer (2002, 2004).

are those of collusion among bidders and encouraging participation, both of which became
quite evident in some of the spectrum auctions, as discussed by Klemperer (2002) (see also
Binmore and Klemperer (2002)). Collusion (e.g., Graham and Marshall (1987), Hendricks
and Porter (1989), McAfee and McMillan (1992)) had been discussed to some extent in the
earlier literature, and has received enhanced attention since the FCC auctions (e.g., Cramton and Schwartz (2000), Klemperer (2002), Che and Kim (2006)). Another very important
issue was how to take care of the evident complementarities32 and allow for bidding on packages of licenses or objects, rather than just having simultaneous ascending price auctions, as
discussed by Milgrom (2001, 2004). This was a key issue in the original FCC auction, which
led to a simultaneous ascending bid design and inspired a great deal of debate as to how
to end the bidding license by license, or in combination. There were various suggestions
including ones by Milgrom and Wilson and by McAfee, and there were experimental tests of
alternative designs that were piloted by Ledyard, Plott, Porter and Rangel.33 This has also
led to important theoretical work on the design of various forms of combinatorial auctions
(e.g., see Ausubel and Cramton (1996), Ausubel and Milgrom (2002), Ausubel (2004), Milgrom (2007)).34 The designing of new auctions continues to present challenges: a current
buy-back auction, involves not only selling off rights to new owners, but also buying them
from old owners. The combinatorics of those auctions are leading to new synergies between
game theory, complexity analysis, and algorithmic design.
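A tiny brute-force sketch conveys the computational issue at the heart of package bidding (the license names and bid amounts are hypothetical): finding the revenue-maximizing assignment requires searching over subsets of bids with mutually disjoint packages, a search that grows exponentially in the number of bids.

```python
from itertools import combinations

# Hypothetical package bids: (bidder, package of licenses, bid amount).
bids = [
    ("b1", frozenset("AB"), 10),   # complementarity: wants A and B together
    ("b2", frozenset("A"), 6),
    ("b3", frozenset("B"), 5),
    ("b4", frozenset("C"), 3),
    ("b5", frozenset("ABC"), 12),
]

def winner_determination(bids):
    """Brute-force search over all subsets of bids whose packages are disjoint,
    returning the revenue-maximizing allocation. The exponential enumeration is
    exactly the computational bottleneck noted for combinatorial auctions."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            packages = [b[1] for b in subset]
            # Packages are pairwise disjoint iff sizes add up to the union's size.
            if sum(len(p) for p in packages) == len(frozenset().union(*packages)):
                value = sum(b[2] for b in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set

value, allocation = winner_determination(bids)
print(value, [b[0] for b in allocation])  # 14 ['b2', 'b3', 'b4']
```

Note that here the efficient outcome splits the licenses among the individual bidders, beating both package bids; with slightly larger complementarities the package bids would win instead, and deciding which case obtains is what makes winner determination hard at scale.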

Conclusion

The direct interaction between theory and practice in matching markets and auctions is
healthy, and largely unparalleled. As mentioned before, the well-defined nature of the institutions in matching and auction applications provides a strong interface with game theory.
The impact of economic analysis in these arenas has been broad and deep, from improving
32 See Ausubel, Cramton, McAfee, and McMillan (1997) for an investigation of the synergies between licenses.
33 The foreword to Milgrom (2004) provides an account of some of the work on that design. See also Cramton (1995) and McAfee and McMillan (1996).
34 There is a series of associated problems in designing combinatorial auctions having to do with the computational complexity and limitations on how many combinations of objects can be bid on simultaneously, which has led to productive interfaces between computer science and economics. For example, see Cramton, Shoham, and Steinberg (2006), and Nisan, Roughgarden, Tardos, and Vazirani (2007).


things like the allocation of kidneys for transplant, to the auctioning of the airwaves.35 This
has rightfully generated much enthusiasm in the research community, and vibrant growth in
these areas, as well as a healthy dose of economics in the field.

References
Abdulkadiroglu, A. and Sonmez, T. (1999), House Allocation with Existing Tenants,
Journal of Economic Theory, vol. 88, pp. 233-260.
Abdulkadiroglu, A. and Sonmez, T. (2003), School Choice: A Mechanism Design Approach, American Economic Review, vol. 93 (3), pp. 729-747.
Abdulkadiroglu, A., Pathak, P. A., and Roth, A. E. (2005), The New York City High
School Match, American Economic Review, Papers and Proceedings, vol. 95 (2) May,
pp. 364-367.
Abdulkadiroglu, A. Pathak, P. A., Roth, A. E. and Sonmez, T. (2005), The Boston
Public School Match, American Economic Review Papers & Proceedings, vol. 95 (2),
pp. 368-371.
Ariely, D., Ockenfels, A. and Roth, A.E. (2005), An Experimental Analysis of Ending
Rules in Internet Auctions, Rand Journal of Economics, 36 (4), 891 - 908.
Athey, S. (2001) Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information, Econometrica, 69, 861 - 890.
Ausubel, L.M. (2004) An Efficient Ascending Bid Auction for Multiple Objects,
American Economic Review, 94, 1452 - 1475.
Ausubel, L. M. and Cramton, P. (2002) Demand Reduction and Inefficiency in Multi-Unit Auctions, working paper.
Ausubel, L.M., P. Cramton, P. McAfee and J. McMillan (1997) Synergies in Wireless
Telephony: Evidence from the Broadband PCS Auction, Journal of Economics and
Management Strategy, 6, 497 - 527.

35 I have not even mentioned other areas of market design, such as electricity markets, Internet markets, and various financial markets, that have also been areas of important interfaces between theory and application.

Back, K. and J.F. Zender (1993) Auctions of divisible goods: On the rationale for
the Treasury experiment, Review of Financial Studies, 6, 733 - 764.
Bali, V. and M.O. Jackson (2002) Asymptotic Revenue Equivalence in Auctions,
Journal of Economic Theory, 106, 161 - 176.
Balinski, M. and T. Sonmez (1999) A Tale of Two Mechanisms: Student Placement,
Journal of Economic Theory, 84, 73 - 94.
Banerjee, S., H. Konishi and T. Sonmez (2001) Core in a Simple Coalition Formation
Game, Social Choice and Welfare, 18, 135 - 153.
Bergemann, D. and Valimaki, J. (2002) Information Acquisition and Efficient Mechanism Design, Econometrica, 70:3, 1007 - 1033.
Binmore, K. and Klemperer, P. D. (2002) The Biggest Auction Ever: The Sale of
the British 3G Telecom Licences, Economic Journal, 112, C74 - C96.
Blair, C. (1988) The Lattice Structure of the Set of Stable Matchings with Multiple
Partners, Mathematics of Operations Research, 13:4, 619 - 628.
Bogomolnaia, A. and M.O. Jackson (2002) The Stability of Hedonic Coalition Structures, Games and Economic Behavior, 38, 201 - 230
Bogomolnaia, A. and H. Moulin (2001) A New Solution to the Random Assignment
Problem, Journal of Economic Theory, 100, 295 - 328.
Bulow, J. and Levin, J. (2006) Matching and Price Competition. American Economic
Review, 96 (3), 652 - 668.
Capen, E. C., R. B. Clapp, and W. M. Campbell, (1971) Competitive Bidding in
High Risk Situations, Journal of Petroleum Technology, 23, 641 - 653.
Che, Y.-K. and D. Gale (1996) Expected revenue of all-pay auctions and first-price
sealed-bid auctions with budget constraints, Economics Letters, 50, 373 - 379.
Che, Y.-K. and J. Kim (2006) Robustly Collusion-Proof Implementation, Econometrica, 74, 1063-1107.


Chen, Y. and Sonmez, T. (2006) School Choice: An Experimental Study, Journal of
Economic Theory, 127, 202 - 231.
Clarke, E. H. (1971) Multipart Pricing of Public Goods, Public Choice, 11(1), 17 - 33.

Compte, O. and P. Jehiel (2000) The Wait-and-See Option in Ascending Price Auctions, Journal of the E


Cramton, P. (1997) The FCC spectrum auctions: An early assessment, Journal of
Economics & Management Strategy, 6 (3), 431-495.
Cramton, P., Gibbons, R. and Klemperer, P. D. (1987) Dissolving a Partnership
Efficiently, Econometrica, 55, 615 - 632.
Cramton, P. and J.A. Schwartz (2000) Collusive Bidding: Lessons from the FCC Spectrum Auctions, Journal of Regulatory Economics, 17, 229 - 252.
Cramton, P., Shoham, Y., and Steinberg, R. (editors) (2006) Combinatorial Auctions,
MIT Press.
Crawford, V.P. and E.M. Knoer (1981) Job matching with heterogeneous firms and
workers, Econometrica, 49:2, 437-450.
Cremer, J. and R. McLean (1988) Full extraction of the surplus in Bayesian and dominant
strategy auctions, Econometrica, 56, 1247 - 1257.
Dasgupta, P., and E. Maskin (1986) The Existence of Equilibrium in Discontinuous
Economic Games I: Theory, Review of Economic Studies, 53, 1 - 26.
Demange, G., D. Gale and M. Sotomayor (1986) Multi-item auctions, Journal of
Political Economy, 94, 863 - 872.
Dreze, J. and J. Greenberg (1980) Hedonic Coalitions: Optimality and Stability,
Econometrica, 48, 987 - 1003.
Dubins, L.E., and D.A. Freedman (1981) Machiavelli and the Gale-Shapley Algorithm, American Mathematical Monthly, 88(7), 485 - 494.
Echenique, F. and J. Oviedo, (2006) A theory of stability in many-to-many matching
markets, Theoretical Economics, 1, 233 - 273.

Ehlers, L. (2008) Truncation Strategies in Matching Markets, Mathematics of Operations Research, 33(2), 327 - 335.
Engelbrecht-Wiggans, R. (1980) Auctions and Bidding Models: A Survey, Management Science, 26, 119 - 142.
Engelbrecht-Wiggans, R., P. Milgrom, and R. Weber (1983) Competitive Bidding and
Proprietary Information, Journal of Mathematical Economics, 11, 161 - 169.
Ergin, H. and Sonmez, T. (2006), Games of School Choice under the Boston Mechanism, Journal of Public Economics, vol. 90, pp. 215-237, January.
Featherstone, C., and M. Niederle (2008) Ex Ante Efficiency in School Choice Mechanisms: An Experimental Investigation, NBER Working Paper No. 14618.
Fortuin, C. M.; Kasteleyn, P. W.; Ginibre, J. (1971) Correlation inequalities on some
partially ordered sets, Communications in Mathematical Physics, 22, 89 - 103
Gale, D., and Shapley, L.S. (1962), College Admissions and the Stability of Marriage,
American Mathematical Monthly, vol. 69, pp. 9-15.
Graham, D. and R. Marshall (1987) Collusive Bidder Behavior in Single-Object
Second-Price and English Auctions, Journal of Political Economy, 95, 1217-1239.
Groves, T. (1973) Incentives in Teams, Econometrica, 41, 617 - 631.
Harris, M. and A. Raviv (1981) Allocation Mechanisms and the Design of Auctions,
Econometrica, 49, 1477 - 1499.
Harrison, G., and K. McCabe (1996) Stability and preference distortion in resource
matching: an experimental study of the marriage problem, Research in Experimental
Economics, 6: 53 - 129.
Harsanyi, J. C. (1967-68) Games with Incomplete Information Played by Bayesian
Players, Management Science, 14, 159-189, 320-334, 486-502.
Harstad, R., J. Kagel, D. Levin (1990) Equilibrium bid functions for auctions with
an uncertain number of bidders, Economics Letters 33, 35 - 40.


Hatfield, J. and S. Kominers (2011) Multilateral Matching, mimeo.


Hatfield, J. and P. Milgrom (2005) Matching with Contracts, The American Economic Review, 95:4, 913-935.

Hendricks, K. and R. Porter (1989) Collusion in Auctions, Annales d'Economie et
de Statistique, 15/16, 217 - 230.
Hendricks, K., R. Porter, and B. Boudreau (1987) Information, Returns and Bidding
Behavior in OCS Auctions: 1954-1969, Journal of Industrial Economics, 35, 517 - 542.
Holt, C. (1980) Competitive Bidding for contracts under Alternative Auction Procedures, Journal of Political Economy, 88, 433 - 445.
Immorlica, N., and M. Mahdian (2005) Marriage, Honesty, and Stability, In the
Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms,
Philadelphia: Society for Industrial and Applied Mathematics, 53 - 62.
Jackson, M.O. (2009) Non-existence of equilibrium in Vickrey, second-price, and English auctions, Review of Economic Design, 13, 137 - 145
Jackson, M. O., and I. Kremer (2004) The Relationship between the Allocation of
Goods and a Sellers Revenue, Journal of Mathematical Economics, 40, 371 - 392.
Jackson, M. O., and I. Kremer (2006) The Relevance of a Choice of Auction Format
in a Competitive Environment, Review of Economic Studies, 73, 961 - 981.
Jackson, M. O., and J. M. Swinkels (2005) Existence of Equilibrium in Single and
Double Private Value Auctions, Econometrica, 73, 93 - 139.
Kagel, J.H., Harstad, R.M. and Levin, D. (1987) Information impact and allocation
rules in auctions with affiliated private values: a laboratory study, Econometrica, 55,
1275 - 1304.
Kagel, J. H. and Levin, D. (2002) Common Value Auctions and the Winner's Curse,
Princeton University Press.


Kagel, J. H. and Roth, A. E. (2000) The dynamics of reorganization in matching markets: a laboratory experiment motivated by a natural experiment. Quarterly Journal
of Economics, vol. 115 (1), pp. 201-235.
Karlin, S, and Y. Rinott (1980) Classes of Orderings of Measures and Related Correlation Inequalities. I. Multivariate Totally Positive Distributions, Journal of Multivariate Analysis, 10, 467 - 498.
Kelso, A. S., and Crawford, V. P. (1982) Job Matching, Coalition Formation, and
Gross Substitutes. Econometrica, vol. 50 (6), pp. 1483-1504.
Klemperer, P. D. (1999) Auction Theory: A Guide to the Literature, Journal of
Economic Surveys, 13.
Klemperer, P. D. (2000) The Economic Theory of Auctions, Elgar: U.K.
Klemperer, P. D. (2002) What Really Matters in Auction Design, Journal of Economic
Perspectives, 16, 169 - 189.
Klemperer, P. (2004) Auctions: Theory and Practice, The Toulouse Lectures in Economics, Princeton University Press.
Knuth, D.E. (1976) Mariages Stables, Les Presses de l'Universite de Montreal.
Kojima, F. and P.A. Pathak (2009) Incentives and stability in large two-sided matching markets, The American Economic Review, 99(3): 608 - 627
Krishna, V. (2003) Asymmetric English Auctions, Journal of Economic Theory, 112,
261 - 288.
Krishna, V. (2010) Auction Theory, Academic Press.
Levin, D. and Smith, J.L. (1994) Equilibrium in Auctions with Entry The American
Economic Review, 84:3, 585 - 599.
Li, H., and Rosen, S. (1998), Unraveling in Matching Markets. American Economic
Review, 88, 371 - 387.


Ma, J. (1994) Strategy-Proofness and the Strict Core in a Market with Indivisibilities, International Journal of Game Theory, 23, 75 - 83
Marshall, R., M. Meurer, J. Richard, and W. Stromquist (1994) Numerical Analysis
of Asymmetric First Price Auctions, Games and Economic Behavior, 7, 193 - 220.
Maskin, E. S. and Riley, J. G. (1984) Optimal Auctions with Risk Averse Buyers,
Econometrica, 52, 1473 - 1518.
Maskin, E. S. and Riley, J. G. (2000) Asymmetric Auctions, Review of Economic
Studies, 67, 413 - 438.
Maskin, E. S. and Riley, J. G. (2000b) Equilibrium in Sealed High Bid Auctions,
Review of Economic Studies, 67, 439 - 452.
Matthews, S. A. (1983) Selling to Risk Averse Buyers with Unobservable Tastes,
Journal of Economic Theory, 30, 370 - 400.
Matthews, S. A. (1987) Comparing Auctions for Risk-Averse Buyers: A Buyers Point
of View, Econometrica, 55, 633 - 646.
McAdams, D. (2003) Isotone Equilibria in Games of Incomplete Information, Econometrica, 71, 1191 - 1214.
McAfee, R. P. and McMillan, J. (1987) Auctions and Bidding. Journal of Economic
Literature, 25, 699 - 738.
McAfee, R. P and McMillan, J. (1992) Bidding Rings, American Economic Review,
82, 579 - 599.
McAfee, R. P. and McMillan, J. (1994) Selling Spectrum Rights, Journal of Economic
Perspectives, 8, 145 - 162.
McAfee, R. P and McMillan, J. (1996) Analyzing the Airwaves Auction, Journal of
Economic Perspectives, 10, 159 - 175.
McAfee, R.P. and P. Reny (1992) Correlated information and mechanism design,
Econometrica, 60, 395 - 422.


McAfee, R. P. and Vincent, D. (1993) The Declining Price Anomaly, Journal of


Economic Theory, 60, 191 - 212.
McKinney, C. N., Niederle, M., and Roth, A. E. (2005) The collapse of a medical
labor clearinghouse (and why such failures are rare), American Economic Review, 95
(3), 878 - 889.
McMillan, J. (1994) Selling Spectrum Rights, Journal of Economic Perspectives, 8,
145 - 162.
Milgrom, P. (1979) A convergence theorem for competitive bidding with differential
information, Econometrica, 47, 670 - 688.
Milgrom, P. (1981) Rational expectations, information acquisition, and competitive
bidding, Econometrica, 49, 921 - 944.
Milgrom, P. (2000) Putting Auction Theory to Work: The Simultaneous Ascending
Auction, Journal of Political Economy, 108, 245 - 272.
Milgrom, P. (2004) Putting Auction Theory to Work, Cambridge University Press.
Milgrom, P. (2007) Package Auctions and Package Exchanges, Econometrica, 75:4,
935 - 966.
Milgrom, P. and R. Weber (1982) A theory of auctions and competitive bidding, Econometrica, 50, 1089 - 1122.
Moulin, H. (1995) Cooperative Microeconomics, Princeton University Press.
Myerson, R. (1979) Incentive compatibility and the bargaining problem, Econometrica, 47, 61 - 74.
Myerson, R. (1981) Optimal auction design, Mathematics of Operations Research,
6, 58 - 73.
Niederle, M., and Roth, A. E. (2003a) Relationship Between Wages and Presence of
a Match in Medical Fellowships, Journal of the American Medical Association, 290
(9), 1153-1154.


Niederle, M., and Roth, A. E. (2003b) Unraveling reduces mobility in a labor market:
Gastroenterology with and without a centralized match, Journal of Political Economy,
111 (6), 1342-1352.
Niederle, M., Proctor, D.D. and Roth, A.E. (2008) The Gastroenterology Fellowship
Match: The First Two Years, Gastroenterology, 135:2, 344 - 346.
Niederle, M., and L. Yariv (2010) Decentralized Matching with Aligned Preferences,
mimeo.
Nisan, N., T. Roughgarden, E. Tardos, and V. Vazirani (editors) (2007) Algorithmic
Game Theory, Cambridge University Press.
Nyborg, K. and S. Sundaresan (1996) Discriminatory versus Uniform Treasury Auctions: Evidence from When-Issued Transactions. Journal of Financial Economics, 42,
63 - 104.
Ortega-Reichert, A. (1968) Models for Competitive Bidding Under Uncertainty,
Ph.D. Thesis, Department of Operations Research Technical Report No. 8, Stanford
University, 1968.
Pais, J., and A. Pinter (2008) School choice and information: an experimental study
on matching mechanisms, Games and Economic Behavior, 64(1), 303 - 328.
Palfrey, T. R. (1983) Bundling Decisions by a Multiproduct Monopolist with Incomplete Information, Econometrica, 51, 463 - 484.
Pesendorfer, W. and J. Swinkels (1997) The Loser's Curse and Information Aggregation in Common Value Auctions, Econometrica, 65, 1247 - 1281.
Pesendorfer, W. and J. Swinkels (2000) Efficiency and Information Aggregation in
Auctions, American Economic Review, 90, 499 - 525.
Plott, C. R. (1997) Laboratory Experimental Testbeds: Application to the PCS Auction. Journal of Economics & Management Strategy, vol. 6 (3), Fall, pp. 605-638.
Porter, R. H. (1995) The Role of Information in U.S. Offshore Oil and Gas Lease
Auctions, Econometrica, 63, 1 - 27.

Reny, P. J. (1999) On the Existence of Pure and Mixed Strategy Nash Equilibria in
Discontinuous Games, Econometrica, 67, 1029 - 1056.
Reny, P. J. (2010) On the Existence of Monotone Pure Strategy Equilibria in Bayesian
Games, forthcoming in Econometrica.
Reny, P.J. and S. Zamir (2003) On the Existence of Pure Strategy Monotone Equilibria
in Asymmetric First-Price Auctions, Econometrica, 72, 1105 - 1126.
Riley, J. and W. Samuelson (1981) Optimal auctions, American Economic Review, 71, 381 - 392.
Roth, A. E. (1982a) The Economics of Matching: Stability and Incentives, Mathematics of Operations Research, 7, 617-628.
Roth, A. E. (1982b) Incentive Compatibility in a Market with Indivisible Goods,
Economics Letters, 9, 127-132.
Roth, A. E. (1984) The evolution of the labor market for medical interns and residents:
A case study in game theory. Journal of Political Economy, 92, 991-1016.
Roth, A.E. (1985) The College Admissions Problem is not Equivalent to the Marriage
Problem, Journal of Economic Theory, 36, 277-288.
Roth, A.E. (1989) Two-Sided Matching with Incomplete Information about Others'
Preferences, Games and Economic Behavior, 1, 191 - 209.
Roth, A.E. (1990) New Physicians: A Natural Experiment in Market Organization,
Science, 250, 1524-1528.
Roth, A.E. (1991) A Natural Experiment in the Organization of Entry Level Labor
Markets: Regional Markets for New Physicians and Surgeons in the U.K., American
Economic Review, 81, 415-440.
Roth, A. E. (2008) What have we learned from market design? Economic Journal,
118, 285 - 310.


Roth, A. E. and Peranson, E. (1999) The Redesign of the Matching Market for American Physicians: Some Engineering Aspects of Economic Design. American Economic
Review, vol. 89 (4), pp. 748-779.
Roth, A.E. and A. Postlewaite (1977) Weak versus Strong Domination in a Market
with Indivisible Goods, Journal of Mathematical Economics, 4, 131 - 137.
Roth, A.E., and U.G. Rothblum (1999) Truncation strategies in matching markets: in search of advice for participants, Econometrica, 67(1), 21 - 43.

Roth, A. E., Sonmez, T., and Unver, M. U. (2004) Kidney Exchange, Quarterly
Journal of Economics, 119 (2), 457 - 488.

Roth, A. E., Sonmez, T., and Unver, M. U. (2005) Pairwise Kidney Exchange,
Journal of Economic Theory, 125 (2), 151 - 188.

Roth, A. E., Sonmez, T., and Unver, M. U. (2007) Efficient Kidney Exchange: Coincidence of Wants in Markets with Compatibility-Based Preferences, American Economic
Review, 97 (3), 828 - 851.

Roth, A. E., Sonmez, T., Unver, M. U., Delmonico, F. L., and Saidman, S. L. (2006)
Utilizing List Exchange and Undirected Good Samaritan Donation through Chain
Paired Kidney Donations, American Journal of Transplantation, 6 (11), 2694 - 2705.
Roth, A. E. and Sotomayor, M. (1990) Two-Sided Matching: A Study in GameTheoretic Modeling and Analysis, Econometric Society Monograph Series, Cambridge
University Press.
Roth, A. E. and J.H. Vande Vate (1990) Random paths to stability in two-sided
matching, Econometrica 58, 1475 - 1480.
Rothkopf, M. H. (1969) A Model of Rational Competitive Bidding, Management
Science, 15, 362 - 373.
Rustichini, A., Satterthwaite, M. A. and Williams, S. R. (1994) Convergence to Efficiency in a Simple Market with Incomplete Information, Econometrica, 62, 1041 - 1063.


Satterthwaite, M. A. and Williams, S. R. (1989a) The Rate of Convergence to Efficiency in the Buyers Bid Double Auction as the Market Becomes Large, Review of
Economic Studies, 56, 477 - 498.
Scarf, H. (2009) My introduction to top-trading cycles, Games and Economic Behavior, 66, 630 - 631.
Shapley, L. and M. Shubik (1972) The Assignment Game I: The Core. International
Journal of Game Theory, 1, 111 - 130.
Shapley, L. S. and Scarf, H., (1974) On Cores and Indivisibility, Journal of Mathematical Economics, 1, 23 - 28.
Simon, D.P. (1994) Markups, quantity risk, and bidding strategies at Treasury coupon
auctions, Journal of Financial Economics 35, 43 62.
Sonmez, T. (1997) Manipulation via Capacities in Two-Sided Matching Markets,
Journal of Economic Theory, 77, 1, 197 - 204.
Sonmez, T. (1999) Strategy-proofness and Essentially Single-valued Cores, Econometrica, 67, 3, 677 - 689.

Sonmez, T. and U. Unver (2011) Matching, Allocation, and Exchange of Discrete
Resources, in The Handbook of Social Economics, edited by J. Benhabib, A. Bisin,
and M.O. Jackson, North Holland Press.
Swinkels, J. (2001) Efficiency of large private value auctions, Econometrica, 69, 37 - 68.
Varian, H.R. (2007) Position Auctions, International Journal of Industrial Organization, 25, 1163 - 1178.
Vickrey, W. (1961) Counterspeculation, Auctions, and Competitive Sealed-Tenders,
Journal of Finance, 16, 8 - 37.
Vickrey, W. (1962) Auctions and Bidding Games, Recent Advances in Game Theory,
Princeton University, 15 - 27.
Wilson, R.B. (1967) Competitive Bidding with Asymmetrical Information, Management Science, 13:11, 816 - 820.

Wilson, R.B. (1969) Competitive bidding with disparate information, Management
Science, 15, 446 - 448.
Wilson, R.B. (1977) A bidding model of perfect competition, Review of Economic
Studies, 44, 511 - 518.
Wilson, R. B. (2002) Architecture of Power Markets, Econometrica, 70, 1299 - 1340.
