OPTIMIZATION AND OPERATIONS RESEARCH – Game Theory - Joachim Rosenmuller and Walter Trockel

1 GAME THEORY
2
3 Joachim Rosenmüller and Walter Trockel
4 University of Bielefeld, Institute of Mathematical Economics IMW, D-33615 Bielefeld,
5 Germany
6
7 Keywords: allocation, apportionment, Aumann economy, axiomatic approach,
8 balanced games, bargaining solution, characters, coalitional form, convex games,
9 cooperative game, coordination game, core, cost sharing, directed games, equivalence
10 principle, exchange economy, extensive form, evolutionary stability, fictitious play,
11 game form, homogeneous games, implementation, incentive compatible, Kalai-
12 Smorodinsky solution, knowledge, LP-game, Maschler-Perles solution, mechanism,
13 modiclus, Nash equilibrium, Nash solution, normal form, nucleolus, outcome function,
14 payoff function, reduced games, repeated games, revelation principle, Shapley value,
15 simple games, social choice rule, TU-game, vNM-Stable Set, voting games, Walrasian
16 equilibrium.
17
18 Contents
19
20 1. Introduction
21 2. Foundations of Non-cooperative Game Theory
22 3. NTU-Games
23 4. TU-Games
24 5. The Equivalence Principle
25 6. Mechanism Theory
26 7. Repeated Games
27 8. Evolution and Learning in Games
28 9. Experimental Games
29 10. Concluding Remarks
30 Glossary
31 Bibliography
32 Biographical Sketch
33
34 Summary
35
36 Game Theory describes human interaction involving conflict, cooperation and
37 competition; the term Interpersonal Decision Theory is synonymous. The name reflects
38 the fact that most essential features of this field are manifested in parlor games. This
39 topic-level treatment covers large parts of the basic concepts and methods and sketches
40 some fields of recent application. The simultaneous occurrence of strategic, stochastic
41 and dynamic phenomena, the fundamental role of epistemic aspects like knowledge and
42 information and the impact of institutional and organizational structures make game
43 theoretic analysis a highly complex task.
44
45 In order to deal with various facts of social interaction different forms of strategic or
46 cooperative game models have been developed. The Normal (or Strategic) Form
47 describes the strategic alternatives and the Extensive Form reflects the evolvement of
48 games in time as governed by players’ successive decisions during play. In particular,

49 Repeated Games with Incomplete Information describe iterated plays of the same
50 randomly influenced game about which the players receive asymmetric information.
51 The Coalitional Form describes power of coalitions.
52
53 Equilibria and solutions represent various approaches to solving games or to describing
54 stable, fair, expected, or simply likely payoffs of games.
55
56 In mechanism design an imperfectly informed planner with limited enforcement power
57 creates the rules of a game which ensure that any potential population of players, by
58 playing an equilibrium according to those rules, ends up in a socially desired state.
59
60 The Equivalence Principle deals with an important application of game theory to large
61 economies, where due to the dominating power of competition distinct solution
62 concepts asymptotically coincide with the Walrasian equilibria.
63
64 Recent applications of game theory to evolutionary biology, to evolutionary models of
65 social systems, and to learning are also briefly sketched.
66
67 Finally, results from game theoretic analysis based on perfectly rational players are
68 contrasted with laboratory experiments that have been performed with real, hence at
69 best boundedly rational, players.
70
71 A brief assessment of game theory as a part of Operations Research (or vice versa)
72 concludes.
73
74 1. Introduction
75
76 Game Theory is a mathematical theory of socio-economic phenomena exhibiting
77 human interaction, i.e., conflict and cooperation between decision making individuals
78 (the players). The theory is based on the structural procedures of mathematics and
79 directed towards problems in various fields of applications.
80
81 An appropriate synonym is “Multipersonal Decision Theory”. The main paradigms are
82 those of strategic behavior, incomplete information, and mutual anticipation of actions,
83 bargaining power, fairness and equity.
84
85 Game Theory approaches the problem of decisions for a group of individuals under
86 uncertainty; it deals with lack of information about the state of the environment, the
87 state of the interpersonal decision process and the state of the opponent’s incentives and
88 abilities. Hence, a probabilistic context is inevitable. The states of nature as well as the
89 strategic behavior of the players involved are generally thought to be randomly
90 influenced.
91
92 In addition, the mutual anticipation of opponents' strategic behavior, the mutual
93 knowledge (see knowledge) about the opponents' knowledge and the recursive
94 influence of such considerations on the state of knowledge as well as the
95 resulting strategic consequences are modeled; again they are thought to be randomly
96 influenced. This way an idea of “common knowledge” enters the scene.

97
98 Also, Game Theory focuses on aspects of cooperation, enforced by legal contract or by
99 long standing experience. It treats problems of fair distribution of resources, acceptable
100 outcomes to joint operations, the representation of bargaining power and coalitional
101 influence, the a priori expectation of gains to be achieved from cooperative decisions.
102 The power of coalitions and the resulting influence of individuals, principles of
103 bargaining and axiomatic treatment of solutions, complaints and threats, efficiency and
104 effectiveness, reputation and learning are being discussed on a formal level.
105
106 The performance in strategic or cooperative situations (in “the game”) requires an
107 incentive. A version of utility theory underlies most game theoretical models. This
108 implies that the individuals involved (the players) are capable of expressing preferences
109 with regard to the decisions at stake. Thus, it is required that, for each player, there is a
110 preference ordering or a utility function defined on the set of decisions available to all
111 players.
112
113 Given a player's incentives, he may have incomplete (and randomly influenced)
114 information about the incentives, preferences, or utilities of his opponents. Indeed,
115 Game Theory is capable of describing situations in which players are uncertain about
116 the game they are playing and the opponents they are facing.
117
118 Game Theory is also concerned with clarifying the notion of rational behavior. It does
119 not do so explicitly, but the concept appears implicitly formulated in various attempts to
120 find a “solution” of a game. Solutions more or less imply that the players achieve
121 benefits by acting rationally on the basis that everyone else behaves rationally as well.
122
123 Game Theory basically uses the language of Mathematics; it embraces the analysis of
124 structural relations due to mathematical thinking. Models are formulated in precise
125 definitions, theorems are stated and proved. The mathematical techniques vary through
126 a great range; they involve linear algebra and analysis, measure theory, probability and
127 statistics, stochastic processes and potential theory, partial differential equations,
128 functional analysis, combinatorics, graph theory, optimization and more.
129
130 The main fields of application can be found in economics. However, sociology, political
131 sciences, psychology, industrial organization, management science, biology, warfare
132 etc. are all open to the formulation and formal treatment via games.
133
134 Within the various fields to which Game Theory is applied, the formal mathematical
135 treatment exhibits varying degrees of rigor; descriptions of games may be purely verbal and
136 strategic behavior may be treated in a less rigorous framework. Model builders have a
137 tendency to more or less incorporate the methods and the language of their respective
138 field. In this context Game Theory changes its appearance. Economists tend to a version
139 that resembles their way of thinking in the tradition of ideology of certain schools,
140 biologists use the language of evolutionary theory, etc. In such a context, mathematical
141 rigor is sacrificed for greater adherence to the methods and dogmas of the particular
142 field.
143

144 Historically, Game Theory developed along various different lines of thought, most of
145 them rather disjoint. Mathematicians (in particular the French school LAPLACE, DE
146 MOIVRE, PASCAL) considered the probabilistic aspects of the casino. DANIEL
147 BERNOULLI (1738) (motivated by JEAN and NICOLAS BERNOULLI) considered
148 the St. Petersburg problem; he discussed not only the probabilistic intricacies but also
149 came up with an early version of utility theory. This line was continued by LOUIS
150 BACHELIER (1901), who also created the first version of Brownian motion
151 representing the stock market's fluctuations. EMILE BOREL contributed greatly to putting
152 probability theory into its present shape based on measure and integration (1925-1939).
153 But he was surpassed by JOHN VON NEUMANN after he had unsuccessfully tried to
154 solve the Min-Max Problem (1921).
155
156 The early economists COURNOT (1838) and BERTRAND (1889) discussed oligopoly
157 and developed a notion of strategy. BERTRAND also treated the game of baccarat.
158 This line was continued by EDGEWORTH (1881), ZEUTHEN (1930) and
159 STACKELBERG (1952). EDGEWORTH in addition started a line of discussion
160 leading to the cooperative approach.
161
162 Around 1713 (the same time that JEAN and NICOLAS BERNOULLI reported the St.
163 Petersburg Problem to him), DE MONTMORT was also in contact with J. (THE
164 EARL OF) WALDEGRAVE, who analyzed a 2-person card game. Here probabilities
165 occur in rudimentary form, reflecting strategic behavior.
166
167 Warfare appears in the context of strategic thinking. CLAUSEWITZ discusses the
168 battlefield coolly from the strategic viewpoint. At the beginning of the 20th century some
169 English engineers developed simple evader-pursuer models which resemble differential
170 games between airplanes.
171
172 The decades between 1920 and 1940 reflect the final attempt to view the Theory of
173 Games as a comprehensive field. Von Neumann's proof of the Min-Max Theorem was based on fixed point
174 theorems which in the mid-thirties were particularly developed by BANACH,
175 MAZUR, ULAM, ERDÖS, STEINHAUS, KURATOWSKI. VILLE was the first to
176 provide a proof based on a separation theorem.
177
178 OSKAR MORGENSTERN met VON NEUMANN when both men had to leave Europe
179 in the late 1930s. They laid the foundation of the field of Game Theory with their
180 seminal volume Theory of Games and Economic Behavior, in which they stressed
181 the similarity between strategic and cooperative behavior in the economic context as
182 well as in parlor games. Random influence was considered to be inevitable.
183
184 Due to these authors three versions of “the game” emerge. Games appear in normal
185 form (strategic form), in extensive form, and in coalitional form (see Foundations of
186 Non-cooperative Game Theory). The first two are close relatives; they constitute the
187 basic paradigm of Non-cooperative Game Theory. The coalitional form is the basic
188 paradigm of Cooperative Game Theory.
189
190 The normal form consists of a complete list of possible strategic alternatives for each
191 player. This way each player is assigned a strategy space. In addition, a payoff function

192 is specified for each player. Thus, any simultaneous and independent choice of
193 strategies (one by each player) results in a payoff (a real number, a utility, a money
194 term) to each player.
195
196 The normal form is also referred to as the strategic form, in view of the fact that it
197 provides an overview over the strategic options available to a player.
198
199 It is one of the main tasks of the model builder to recognize “The Game”. Given the
200 data of a multipersonal decision problem of a possibly foggy and unclear nature, one
201 has to specify a normal form game which contains the essential features and is “close”
202 to reality.
203
204 For the normal form the basic “solution concept” is equilibrium. This is a strategic
205 situation (an n -tuple of strategies) with dominant stability properties. An equilibrium
206 may reflect versions of “rational behavior” and in some cases may be identified with
207 “optimal strategies”. In most games, however (as in the real world), there is no “optimal
208 behavior”, equilibria may (or may not) exist in abundance and result in gains of greatly
209 varying utility to the players.
210
211 The extensive form was originally conceived to explain the “rules of the game” (VON
212 NEUMANN MORGENSTERN). Preferably one might think of a time-structured (and
213 stochastically influenced) process that is subject to repeated actions of the players.
214 Intermediate and final payoffs (or costs) are awarded to the players. Decisions at an
215 early state should, therefore, be regarded with respect to the present reward and with
216 respect to the future consequences. The process as well as its history may not be fully
217 observable. Players receive private information concerning the state of the process and
218 the choice of actions of the opponents. Strategic behavior is to be defined according to
219 observations and the development of the process. This way, the extensive form results
220 in a normal form and is subject to the analysis thereof. Then equilibrium can be
221 recognized. The extensive form may provide the basic environment, time structure and
222 “rules of the game”, but the normal form provides the solution concept.
223
224 Turning to the cooperative or coalitional form, we find that the notion of strategy is no
225 longer predominant. Rather it is the possibility of contracts and cooperation which is
226 preeminent. Binding agreements are thought to be possible and enforceable. Thus, the
227 power of coalitions and their influence on the results of a bargaining process is the
228 central topic. A cooperative game is essentially a mapping assigning achievable utilities
229 to coalitions. The task here is to make inference from the power of coalitions to the
230 potential of the individuals. If we know the game, what will be the resulting possibilities,
231 options, expectations, gains to the players?
232
233 The “solution concept” of Cooperative Game Theory is the idea of Stable Solution.
234 While adherent to some idea of equilibrium, the cooperative version of stability is much
235 more static. Stability of the result of bargaining and cooperation, fairness and equity, the
236 returns expected from cooperation, the consequences of an argumentative process, and a
237 final distribution of utility to the players achieved by agreement: these ideas are central
238 to the coalitional form.
239

240 Thus, the balance of non-cooperative versus cooperative theory is made precise by
241 discussing strategic behavior and equilibrium strategies versus the power of coalitions
242 and stable solutions.
243
244 In the detailed discussion, however, it turns out that the borderline is blurred. There is
245 non-cooperative imitation of cooperation: the stabilizing forces of reputation and
246 punishment that appear in repeated games tend to exhibit elements of cooperation; the
247 agency enforcing contracts can be replaced by the pressure of mutual punishment
248 sustaining equilibrium. On the other hand, cooperative theory incorporates elements of
249 strategic behavior. If uncertainty prevails about the opponents' motivation, their
250 preferences and the game one (thinks one) is involved in, then mechanisms enter the scene.
251 These are devices representing agreements dependent on private observations or
252 knowledge of the players. As these observations cannot be verified independently,
253 players may start to behave strategically with respect to the revelation of their
254 information or their strategies. This places the individuals involved in a non-cooperative
255 game after the contract has been agreed upon.
256
257 Some game theorists hold that cooperative theory is not an “independent” topic; in a
258 sense all cooperation should be explained as resulting from strategic behavior. This
259 view may be extended to a position opposed to cooperative theory altogether. Another view,
260 however, is that the idea of the “game” is something Platonic: the paradigm of human
261 competitive and cooperative interaction in the presence of incentives and mutual
262 dependence. Various shapes of this idea materialize, some of them in a precise and
263 mathematically rigorous form.
264
265 2. Foundations of Non-cooperative Game Theory
266
267 2.1 The Normal Form
268
269 The following formal definitions are meant to explain the fundamental topics of Non-
270 cooperative Game Theory.
271
272 A non-cooperative n -person game in normal form is a 2n -tuple
273
274 Γ = (S1 ,..., S n ; F 1 ,..., F n ) , (1)
275
276 with the following ingredients. S i (i = 1, ... , n) denotes the set of strategies of player i .
277 This is a complete list of decisions available to the player; at this stage the details of
278 strategic behavior cannot be distinguished. Each F i is a real valued function defined on
279 the Cartesian product S := S1 × ... × S n of the strategy spaces. F i denotes the payoff to
280 player i , depending on the strategies chosen by all players. The choice of a strategy n -
281 tuple is made simultaneously and independently. When preparing his choice each player
282 is not aware of the opponent’s intentions. However, communication may take place in
283 advance; a discussion of the merits and demerits of strategy n -tuples may well precede
284 the actual choice of strategies.
285

286 A Nash equilibrium (see Foundations of Non-cooperative Game Theory) is an n -tuple
287 s* ∈ S such that deviating is not profitable for a player provided his opponents stick to
288 their choice. Formally:
289
290 F i ( s* ) ≥ F i ( s*1 ,..., s i ,..., s*n ), ( s i ∈ S i ; i = 1,..., n) . (2)
291
292 A priori nothing is said about the establishment of an equilibrium; however, the
293 inherent stability of an equilibrium situation may prevent a player from leaving it. The
294 existence of equilibria requires a basic set of mathematical assumptions, generally the
295 strategy spaces should be (contained in) topological vector spaces and the payoff
296 functions should be quasi-concave and continuous. The standard procedure is to
297 construct the best reply correspondence, which is a mapping assigning to each (n − 1)-
298 tuple of strategies of a player's opponents the set of maximizers of his payoff. A fixed point theorem
299 (KAKUTANI, KY FAN) provides a Nash equilibrium. The first to establish the
300 concept was JOHN F. NASH.
302 If these conditions fail to apply (e.g., if the strategy sets are finite), then the game may
303 be extended in various ways. The mixed extension (see Foundations of Non-
304 cooperative Game Theory) randomizes the choice of strategies. Assume that,
305 on each strategy set S i , there is defined a σ -algebra Pi of measurable sets. The
306 probabilities on Pi are called mixed strategies. This way player i now chooses a random
307 mechanism which generates his original “pure strategies” (the elements of S i ).
308
309 Given an n -tuple of mixed strategies, the product measure, say σ = σ1 ⊗ ... ⊗ σn , reflects
310 the (stochastically) independent choices of strategies. The expectation
311 F i (σ) = ∫ F i ( s )σ(ds ) is used to reflect the payoff to player i at this n -tuple of mixed
312 strategies. Now, if M i denotes the set of mixed strategies of player i , then we have
313 defined a non-cooperative n -person game in the sense of (1); this is
314
315 Γ = (M 1 ,..., M n ; F 1 ,..., F n ) , (3)
316
317 the mixed extension of Γ .
318
319 With a suitable structure on the strategy spaces, there is a topology on the mixed
320 strategy spaces (the w∗ -topology) such that the functions F i are continuous and
321 (multi-)linear with respect to the mixed strategies. This way the above existence theorems can
322 be employed to establish equilibrium in mixed strategies.
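As an illustration of the expectation F i (σ) above, the following sketch evaluates payoffs under independent mixed strategies for the game of Matching Pennies and confirms that uniform mixing by both players leaves no profitable deviation; the game and the candidate equilibrium are assumptions made purely for the example.

import itertools

# Sketch: expected payoff under independent mixed strategies (the product measure),
# illustrated with Matching Pennies (an assumed example game).
F1 = [[1, -1], [-1, 1]]                       # payoffs to player 1
F2 = [[-entry for entry in row] for row in F1]  # player 2 receives the negative

def expected_payoff(F, sigma1, sigma2):
    """F^i(sigma) = sum over pure pairs of F^i(s) * sigma1(s1) * sigma2(s2)."""
    return sum(F[i][j] * sigma1[i] * sigma2[j]
               for i, j in itertools.product(range(2), range(2)))

sigma = [0.5, 0.5]
# Against uniform mixing, every reply of player 1 yields the same expected payoff,
# so no deviation is profitable: uniform mixing by both players is an equilibrium.
print([expected_payoff(F1, [1 - p, p], sigma) for p in (0.0, 0.5, 1.0)])   # [0.0, 0.0, 0.0]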
323
324 Nash equilibrium in mixed strategies can be reinterpreted as follows: for any player i ,
325 the (n − 1)-tuple of mixed strategies of his opponents is regarded as his beliefs
326 concerning the behavior of his opponents. A Nash equilibrium constitutes consistent
327 beliefs of the players concerning their randomized choice of strategies.
328

329 The correlated extension is obtained by introducing a random experiment (a probability
330 space) resulting in private information of the players (i.e., there are subfields of
331 observable events for the players). A correlating strategy for a player is a random
332 variable, measurable with respect to his observable events and resulting in strategies. If
333 each player chooses such a correlating strategy, the expected payoff for all players
334 (from the composition of the correlating strategies and the payoff functions of the
335 original game) is well defined; hence we have a new normal form, the correlated
336 extension. A correlated equilibrium is a Nash equilibrium of the correlated extension.
337 Actually the mixed extension can be embedded into the correlated one and, for many
338 purposes, suffices to treat the relevant strategic aspects.
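A small sketch may clarify the definition: a correlated equilibrium can be verified by checking, for each player and each recommended action, that obeying the recommendation is a conditional best reply. The game (a version of "Chicken") and the correlating distribution below are illustrative assumptions, not part of the original exposition.

# Sketch: verify the obedience constraints of a correlated equilibrium.
# Both the game (a version of "Chicken") and the correlating distribution mu
# are assumptions made purely for illustration.
F1 = [[6, 2], [7, 0]]                 # row player's payoffs
F2 = [[6, 7], [2, 0]]                 # column player's payoffs
mu = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 1/3, (1, 1): 0.0}   # joint recommendations

def obedient(F, mu, player):
    """For every recommended action rec, obeying must beat every deviation dev,
    in expectation over the conditional distribution of the opponent's action."""
    for rec in (0, 1):
        for dev in (0, 1):
            follow = deviate = 0.0
            for (a1, a2), p in mu.items():
                own = a1 if player == 1 else a2
                if own != rec:
                    continue
                follow += p * F[a1][a2]
                deviate += p * (F[dev][a2] if player == 1 else F[a1][dev])
            if deviate > follow + 1e-12:
                return False
    return True

print(obedient(F1, mu, 1), obedient(F2, mu, 2))   # True True: a correlated equilibrium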
339
340 There is a host of applications of this model. It is used in oligopolistic competition and
341 other descriptions of price setting mechanisms, statistics, market entry problems,
342 evolutionary biology, for auctions, principal-agent problems, inspection problems,
343 insurance contracts, job assignment problems, traffic regulation, etc. In many cases the
344 application of mixed strategies proves to be most successful (see Foundations of Non-
345 cooperative Game Theory).
346
347 2.2 The Extensive Form
348
349 The extensive form (see Foundations of Non-cooperative Game Theory) is a dynamic
350 process admitting of control by n players. Stochastic influence is inherent in the system
351 as well as in the strategic behavior of the players. Tree games in which the process
352 moves along the edges of a graph are the favorite model in the literature. At each node a
353 player chooses the subsequent edge. There may be the possibility of imperfect or
354 incomplete information: While the process moves the players may not be aware of the
355 state of the process and they may even not be aware of the utilities and strategic
356 possibilities of their opponents, for short, their opponents' “types” (HARSANYI). We
357 demonstrate the extensive form by a general Markovian dynamic game written
358
359 Σ := ( X, Y; K , K ; Q, f , u, T ) (4)
360
361 Here, T is the horizon or duration of the process. X and Y are the state and action
362 spaces respectively. Each of them is time-structured, i.e. X = X0 × ... × XT etc. Q is a
363 family of stochastic transition kernels governing the law of motion. K again is a family
364 of stochastic kernels generating signals (in K ) which can be observed by the players
365 while the process moves. f is a family of intermediate payoffs and u is a family of
366 terminal payoffs. Assume that there is a path ( x, y ) of temporal development in the state
367 space X × Y . Then the evaluation for player i is written
368
369 C i ( x, y ) = u i ( x_T ) + Σ_{t=1}^{T} f_t^i ( x_{t−1} , y_t ) (5)
370
371 Now, if ( X , Y ) is a stochastic process moving in the state space then C i ( X , Y ) is a
372 random variable the expectation of which evaluates the process. Now, the distribution of
373 the process is governed by the strategic behavior of the players as follows.

374
375 Behavioral strategies are families of kernels, say A_t^i , which reflect the random choice
376 of an action by player i at each instant t , depending on the observable past (i.e. the
377 stream of observable data of the process). The composition of the behavioral strategies
378 A and the law of transition reflected by Q generates transition kernels (Markovian or
379 with a memory) on the state space. Given an initial distribution μ on the initial states
380 X0 there is, therefore, a measure (Markovian or reflecting memory) on the paths of the
381 state space X × Y , call it m_μ^A . This distribution reflects the stochastic influence of
382 behavioral strategies on the motion of the controlled process. Therefore, we consider the
383 payoff to player i resulting from A to be
384
385 C_{A,μ}^i = E C i ( X , Y ) = E_{m_μ^A} C i = ∫ C i ( x, y ) dm_μ^A ( x, y ) (6)

386
387 Now, C_{•,μ}^i is a function on the product space of behavioral strategies, say
388 A = A1 × ... × A n . This way we have constructed the normal form generated by Σ (and
389 μ ) which is
390
391 F_{Σ,μ} = ( A1 ,..., A n ; C_{•,μ}^1 ,..., C_{•,μ}^n ) (7)
392
393 Now the whole apparatus of Nash equilibrium analysis may be employed.
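A rough computational sketch may illustrate how the payoff (6) arises from the primitives of Σ: the fragment below simulates a tiny Markovian dynamic game and estimates C_{A,μ}^i by Monte Carlo. Every primitive (two states, two actions per player, horizon, kernel Q, payoffs f and u, behavioral strategies A, and the initial distribution) is invented for the example.

import random

# Sketch: Monte Carlo estimate of the payoffs (6) of a tiny Markovian dynamic game.
# All primitives below are illustrative assumptions.
random.seed(0)
T = 3

def Q(x, y1, y2):                # law of motion: P(next state = 1 | state x, actions)
    return 0.2 + 0.3 * (y1 == y2) + 0.3 * x

def f(i, x, y):                  # intermediate payoff f_t^i(x_{t-1}, y_t)
    return 1.0 if y[i] == x else 0.0

def u(i, x_T):                   # terminal payoff u^i(x_T)
    return 2.0 * x_T

def A(i, t, x):                  # behavioral strategy: P(action 1 | observed state)
    return 0.5 if i == 0 else 0.8 * x

def estimate(runs=20000, mu1=0.5):
    total = [0.0, 0.0]
    for _ in range(runs):
        x = 1 if random.random() < mu1 else 0               # initial state drawn from mu
        payoff = [0.0, 0.0]
        for t in range(1, T + 1):
            y = tuple(1 if random.random() < A(i, t, x) else 0 for i in (0, 1))
            payoff = [payoff[i] + f(i, x, y) for i in (0, 1)]
            x = 1 if random.random() < Q(x, *y) else 0       # stochastic transition
        total = [total[i] + payoff[i] + u(i, x) for i in (0, 1)]
    return [s / runs for s in total]                         # approximates C_{A,mu}^i

print(estimate())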
394
395 2.3 Strategies, Equilibria, Refinements
396
397 Strategic behavior of players can also be modeled by pure strategies (the choice at each
398 stage is deterministic) or mixed strategies (probabilities or “mixtures” over the pure
399 ones) (see Section 2: Foundations of Non-cooperative Game Theory). Then different
400 normal forms occur. Behavioral strategies are appropriate in a wide range of games.
401 This is due to KUHN'S THEOREM, which states that (with perfect recall, i.e., consistent
402 memory) behavioral strategies generate the same distributions as mixed strategies.
403
404 Perfect recall may be violated if the formal structure of memory represented for a player
405 is in some sense inconsistent. An inconsistent memory structure is the topic of a new
406 branch, information-based Game Theory (the absent-minded driver (RUBINSTEIN)):
407 What can one say about strategic behavior if players (or automata) forget systematically
408 essential details of the past? Or if they are in a wider sense non-rational (governed by
409 bounded rationality)?
410
411 The temporal structure of stochastic games or tree games permits refinements of the
412 equilibrium concept. Nash equilibria are subgame perfect (SELTEN), if the equilibrium
413 property prevails in every truncated tail game (in the finite context they are obtained by
414 backwards induction). However, tail games or subgames are well defined only with
415 complete or perfect information. With imperfect information the construction of a
416 posteriori probabilities for the state of the process may be conditioned on information
417 which, at equilibrium, has zero probability. This is solved by various versions of

418 perturbing the game so as to enable Bayesian posteriors to be computed. Sequential
419 equilibrium (KREPS-WILSON) therefore consists of pairs of strategies and “beliefs”.
420 Perfect equilibrium (the first and basic concept, due to SELTEN) is a close relative. Other
421 versions (e.g., stable equilibria due to KOHLBERG-MERTENS) enhance the scene.
422 There is a peculiar connection between this kind of stability concept, equilibrium
423 selection, and the shape of the equilibrium correspondence manifold of the normal form
424 generated by Σ .
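For the finite case mentioned above, subgame perfect equilibria can be computed by backward induction. The following sketch does this on a two-stage entry game with perfect information; the tree, the player names, and the payoffs are purely illustrative assumptions.

# Sketch of backward induction on a finite tree with perfect information.
# The tree (an assumed two-stage entry game) has leaves carrying payoff vectors
# and inner nodes naming the mover.
tree = ("entrant", {                       # the entrant moves first
    "stay out": (0, 2),
    "enter": ("incumbent", {               # then the incumbent moves
        "fight": (-1, -1),
        "accommodate": (1, 1),
    }),
})
PLAYERS = {"entrant": 0, "incumbent": 1}

def backward_induction(node):
    """Return (payoff vector, chosen path) of the truncated tail game rooted at node."""
    if isinstance(node, tuple) and isinstance(node[1], dict):
        mover, actions = node
        best = None
        for action, child in actions.items():
            payoff, path = backward_induction(child)
            if best is None or payoff[PLAYERS[mover]] > best[0][PLAYERS[mover]]:
                best = (payoff, [action] + path)
        return best
    return node, []                        # a leaf: payoffs, empty continuation

print(backward_induction(tree))            # ((1, 1), ['enter', 'accommodate'])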
425
426 The information structure of a game has intensively influenced research. HARSANYI
427 pointed out that incomplete information (uncertainty about the type of other players) and
428 imperfect information (uncertainty about the state of the process) are more or less the
429 same. MERTENS-ZAMIR constructed the appropriate belief spaces such that an
430 infinite number of hierarchies about knowledge of each other's knowledge etc. can be
431 formally constructed.
432
433 AUMANN established the idea of common knowledge (see knowledge) which
434 formalizes mutual knowledge about mutual knowledge again with infinite hierarchies. It
435 is important that, when a game is being played, all players are informed about the game,
436 all players are informed about the fact that all players are informed about the game, all
437 players… etc. As this topic extends into epistemological questions (partially of a
438 philosophical nature), aspects of formal logic become more and more important for
439 Game Theory. The construction of belief spaces, hierarchy of beliefs, learning and
440 knowledge about learning are fascinating topics of the theory.
441
442 The mathematical intricacies of stochastic games in the most general sense are also
443 enhanced by the information problem. Stochastic games in the proper sense were
444 originated by SHAPLEY (complete information). The question of the existence and
445 shape of equilibria in the general stochastic game with incomplete information is still an
446 intriguing matter. Repeated games with incomplete information constitute the topic
447 treated most extensively. The details can be found in Section 7.
448
449 3. NTU-Games
450
451 3.1 The Coalitional Function
452
453 Within the framework of cooperative Game Theory the notion of strategy loses
454 importance. As the players are capable of binding agreements, they may commit
455 themselves to certain actions which result in joint utilities for coalitions. These actions
456 may well be strategies in an underlying non-cooperative game (an idea basically favored
457 by VON NEUMANN-MORGENSTERN). But as the agreement or contract can be
458 enforced by some supervising agency, there is less room for strategic behavior.
459
460 A contract between members of a coalition includes a specific agreement concerning the
461 distribution of utility which results from the joint venture. Usually there will be a great
462 deal of alternatives a coalition can strive for. The model consists of comprehensive lists
463 of utility vectors available to each feasible coalition. While bargaining the players will
464 not only look to the possible achievements of a coalition which is presently being
465 discussed. They will also look to the payoff vectors of other coalitions and they will

466 argue with their outside options defined thereby. Therefore, the agreement finally
467 reached, whether inside the grand coalition or some smaller sub-coalition, eventually
468 reflects all options and possibilities available to the various coalitions evaluated with
469 respect to the players.
470
471 The formal description is provided as follows. A Cooperative Game Without Side
472 Payments or for short an NTU-Game is a triple ( I , P, V ) . Here, I is the set of players
473 (frequently assumed to be finite e.g., I = {1,..., n} ). P is a system of subsets of I which
474 is interpreted as the collection of feasible coalitions. Finally V : P → P(ℝ^n) is the
475 coalitional function. This function assigns to every coalition a set of utility vectors.
476 Certain regularity assumptions are imposed upon the function V in order to render it
477 feasible for a “game”. For instance, as coalition S usually can assign utilities only to its
478 members, it makes sense to assume V ( S ) ⊆ ℝ^S . Also it is assumed that V ( S ) is
479 comprehensive, that is utility can be freely disposed of (formally: every vector
480 dominated by an element of V ( S ) belongs to V ( S ) ). In addition some version of
481 boundedness from above ensures that utility is not unlimitedly available. Convexity
482 assumptions also are quite common.
483
484 The economic context provides various examples. E.g. if
485
486 E = ( I , X , (u i )i∈I , (ai )i∈I ) (8)
487
488 is an exchange economy represented by a set of agents, a commodity space, a
489 family of utility functions on commodities, and initial endowments for the agents, then we can construct the
490 corresponding NTU-Game. For each coalition we collect all the utilities available by
491 mutual exchange. Formally the market game V = V E is given by
492
⎧⎪ ⎫⎪
493 V ( S ) = ⎨(u i ( xi ))i∈S xi ∈ X (i ∈ I ), ∑ xi = ∑ a i ⎬ ( S ∈ P) (9)
⎩⎪ i∈S i∈S ⎭⎪
494
495 or, more technically, the comprehensive hull of this set (i.e., we admit free disposal of
496 utility). Thus, coalition S can attain all utilities for its members that can be obtained by
497 reallocating the commodities within this coalition. Market games obviously establish a
498 close connection between Game Theory and General Equilibrium Theory (see Section
499 5)
500
501 Within the context of NTU-Games, the class of bargaining problems is obtained by
502 admitting only the grand coalition I and the singleton coalitions {i}, (i ∈ I ) . Thus,
503 players can either join in the grand coalition or be on their own. Observe that it suffices
504 to specify two data in order to define a bargaining problem: the set U := V ( I ) of utilities
505 available to the grand coalition and the maximal utility ui that can be achieved by player
506 i ∈ I . A bargaining problem is, therefore, defined by a pair (u , U) , the status quo point
507 and the feasible set.

508
509 The most important class of games is generated by admitting side payments. Imagine
510 that, within each coalition, the players are entitled and capable of exchanging utility on
511 a universal scale so that a unit of utility can be transferred from one player to another
512 one without changing the nature of its value. More precisely, whenever x ∈ V ( S ) holds
513 true, i and j are players in S and ε is a small quantity of utility, then we assume that the
514 vector ( x1 ,..., xi − ε,..., x j + ε,..., xn ) is an element of V ( S ) as well. It turns out
515 immediately that every V ( S ) is necessarily of the form
516
517 V ( S ) = { x : Σ_{i=1}^{n} x_i ≤ v ( S ) } (10)
518
519 Here, v ( S ) is a real number, the utility assigned to coalition S . Obviously the function
520 V is specified once the function v : P → ℝ is defined. Therefore, we call the triple
521 ( I , P, v ) a cooperative game with side payments, with transferable utility, or for short a
522 TU-Game.
523
524 3.2 Solutions
525
526 We now turn to solution concepts. A solution in the general sense describes outcomes
527 of the bargaining process (see NTU-Games). This may involve varying vantage points. A
528 solution may represent an evaluation of the bargaining power of players deduced from
529 the game, it may respect fairness considerations or principles of equity, expected gains
530 in some (vaguely defined) stochastic environment, or results of a specified procedure
531 involving arguments, counter arguments, objections and counter objections. Solutions
532 may also be defined as the result of a non-cooperative game which is based on the data
533 of the cooperative game and represents a bargaining process. A Nash equilibrium of
534 such a game may result in a solution of the cooperative game. The interpretation of this
535 non-cooperative Nash equilibrium may furnish a justification of the cooperative
536 bargaining solution resulting. All this can be formalized within the proper context.
537
538 The number of solution concepts is considerable; their esteem is greatly at variance
539 among game theorists. There is, however, agreement that solutions have to prove their
540 merits by the results they yield on a sufficiently rich class of games. Formally, a
541 solution concept is a mapping, point valued or set valued, defined on some class V of
542 NTU-games. If the set of players I is fixed, then
543
544 ϕ :V → I
or φ : V → P ( I
) (11)
545
546 defines a solution. That is, for every game there is an assignment (or a set of
547 assignments) of utility to each player “resulting” from the game.
548
549 Solution concepts should exhibit certain appealing properties expressed by conditions or
550 axioms. Ideally, they are uniquely defined by an appropriate set of axioms; this is the
551 axiomatic approach. Procedural approaches, definitions by extension of “natural” or

552 “canonical” concepts, solutions based on the economic tradition or the more
553 mathematical approach via invariance properties are also common.
554
555 Let us briefly mention solution concepts for bargaining problems, which are suitably
556 called bargaining solutions. A bargaining solution (see NTU-Games) obeys standard
557 axioms: as a mapping on (a subclass of) bargaining problems it commutes with
558 permutations of the players (i.e. the names of the players are irrelevant). It commutes
559 with positive affine transformations of ℝ^I , i.e., it follows transformations of the utility scales.
560 Frequently it is Pareto efficient. That is, at the evaluation of a fixed game, eventually no
561 player can strictly improve his outcome unless another player suffers. Finally, one
562 requires individual rationality: no player accepts less than he can achieve by his own
563 efforts.
564
565 In general it needs a specifically defined further axiom in order to generate a uniquely
566 defined bargaining solution. The historically first and basic approach is provided by the
567 Nash solution (see NTU-Games). The decisive axiom is called “Independence of
568 Irrelevant Alternatives”. It determines an outcome on the Pareto surface of the feasible
569 set maximizing the coordinate product (relative to the status quo point’s coordinates).
570 Further solutions are the Kalai-Smorodinsky solution (uniquely characterized by a
571 weak monotonicity axiom) and the Maschler-Perles solution (defined by super-
572 additivity).
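As a numerical illustration of the Nash solution, the sketch below maximizes the product of utility gains over the Pareto frontier of an assumed feasible set (all nonnegative x with x1 + 2·x2 ≤ 4) relative to the status quo (0, 0); both the feasible set and the status quo are invented for the example.

# Sketch of the Nash bargaining solution: maximize the product of utility gains
# over the feasible set, relative to the status quo.  The feasible set and the
# status quo point are illustrative assumptions.
d = (0.0, 0.0)                                # status quo point

def feasible(x1, x2):
    return x1 >= 0 and x2 >= 0 and x1 + 2 * x2 <= 4

def nash_solution(step=0.001):
    best, best_value = None, -1.0
    x1 = 0.0
    while x1 <= 4.0:                          # search the Pareto frontier x1 + 2*x2 = 4
        x2 = (4.0 - x1) / 2.0
        value = (x1 - d[0]) * (x2 - d[1])     # the Nash product
        if feasible(x1, x2) and value > best_value:
            best, best_value = (x1, x2), value
        x1 += step
    return best

print(nash_solution())                        # approximately (2.0, 1.0)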
573
574 Meta Bargaining Theory and bargaining with Incomplete Information are modern
575 extensions of the traditional bargaining theory.
576
577 The first one strives to deal with the problem of choosing between various concepts of
578 bargaining theory on the basis of axiomatic or procedural treatment. In the context of
579 incomplete information, players are not fully informed about certain characteristics of
580 their opponents. These characteristics may describe the preferences or endowments of
581 other players. There may be a common prior concerning the distribution of such
582 characteristics and a single player gets some private information (reflected by a chance
583 move at the beginning of the bargaining process or the like) concerning his own data.
584 For short, each player knows his own “type” and has a priori probabilities about the
585 opponents’ types. Agreements may be registered with respect to certain types of
586 mechanisms, i.e., mappings generating decisions in dependence of all players’
587 announcements of their types. Now, the way to announce (and possibly misrepresent)
588 one’s type is dictated by strategic behavior. Hence, there appears a non-cooperative n-
589 person game, the equilibria of which may correspond to incentive compatible
590 mechanisms. The study of such mechanisms, axiomatic treatment etc. is the relevant
591 topic within this field (see also Section 6).
592
593 Solution concepts for general NTU-games are mainly discussed after the fashion of the
594 TU-concepts. We therefore postpone the discussion of this topic. (see the section on
595 NTU-Games).
596
597 4. TU-Games
598
599 4.1 Classification of games

600
601 A cooperative game with side payments or, for short a TU-game is formally represented
602 by a triple ( I , P, v ) . Here, I is the set of players, P ⊆ {S | S ⊆ I } is the collection
603 (usually a field) of feasible coalitions, and v : P → ℝ , v(∅) = 0 , is the “characteristic”
604 or coalitional function (see TU-Games), frequently referred to as “the game” as well.
605 For most of our discussion we assume the player set to be finite; we use I = {1,..., n} .
606 Also, P can be viewed to be the power set of I , i.e., all coalitions are admitted for
607 cooperation. Intuitively, a coalition S ∈ P by agreeing to a contract about cooperation
608 can achieve (or is awarded) a worth or value v ( S ) . This worth is a monetary or
609 utilitarian quantity, all players see it on a universally accepted scale and arbitrary
610 quantities of this medium can be transferred by some (unidentified) mechanism between
611 the players. In view of these simplifying assumptions, the theory of TU-games is
612 extensively developed.
613
614 In addition, TU-games have been applied in the context of cost sharing, which is rather
615 a topic of industrial organization. Here, coalitions may be subsidiaries or divisions of a
616 company or, more generally, groups of cost generating factors. The coalitional function
617 describes the cost generating structure. Thus it assigns “disutilities” (costs, expenses)
618 instead of utilities.
619
620 Concrete interpretations of the nature of a TU-game may depend on the context but also
621 on the mathematical nature of the set function v . For the sake of this discussion we
622 assume that v is nonnegative.
623
624 An additive set function m on P is, for finite player set I , tantamount to a vector
625 m ∈ ℝ^n via
626 m ( S ) := Σ_{i∈S} m_i   ( S ∈ P)
627 Within the framework of Cooperative Game Theory additive set functions (or
628 measures) are meant to represent distributions of utility. As games, they are “trivial”
629 or “inessential”, as cooperation does not improve a coalition's worth: for disjoint
630 coalitions S , T ∈ P we have m ( S ) + m (T ) = m ( S ∪ T ) .
631
632 The situation changes when we consider superadditive games characterized by the
633 defining inequality v ( S ) + v (T ) ≤ v ( S ∪ T ) ( S , T ∈ P disjoint). In such games the
634 formation of coalitions is worthwhile as the total gains increase, thus players can expect
635 to achieve a larger share of utility by cooperation.
636
637 A subclass of superadditive games is provided by the class of balanced games. Call a
638 system of coalitions S ⊆ P balanced if there is a set of positive coefficients
639 (c_S)_{S∈S} such that
640
641 Σ_{S∈S} c_S 1_S = 1_I (12)

642
643 holds true. We interpret the coefficients as an “intensity” to operate a coalition with.
644 Hence, a balanced system is a collection in which the players are running the various
645 coalitions with reduced intensity instead of joining in the grand coalition.
646
647 A game v is called balanced (SHAPLEY-BONDAREVA) if, for any balanced system
648 S ⊆ P and corresponding coefficients (c_S)_{S∈S} it follows that
649 Σ_{S∈S} c_S v ( S ) ≤ v ( I ) (13)
650
651 holds true. Verbally: it pays off to join within the grand coalition as there is no better
652 way to achieve the same utility by splitting into any balanced system and running the
653 respective coalitions with reduced intensities.
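Checking (13) directly over all balanced systems is cumbersome, but by linear programming duality (the SHAPLEY-BONDAREVA argument) a game is balanced exactly when the optimum of the linear program below equals v(I) (equivalently, as stated in Section 4.2, when the core is nonempty). The sketch assumes SciPy is available; the three-person simple majority game is an illustrative example with an empty core.

from itertools import combinations
from scipy.optimize import linprog

# Sketch: test balancedness (equivalently, nonemptiness of the core) via the LP
#   min x(I)  subject to  x(S) >= v(S) for every nonempty coalition S;
# the game is balanced iff the optimal value equals v(I).
# The three-person simple majority game below is an illustrative assumption.
players = (1, 2, 3)
v = {(1,): 0, (2,): 0, (3,): 0, (1, 2): 1, (1, 3): 1, (2, 3): 1, (1, 2, 3): 1}

def is_balanced(players, v):
    n = len(players)
    A_ub, b_ub = [], []
    for size in range(1, n + 1):
        for S in combinations(players, size):
            A_ub.append([-1.0 if p in S else 0.0 for p in players])   # -x(S) <= -v(S)
            b_ub.append(-float(v[S]))
    res = linprog(c=[1.0] * n, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
    return bool(res.success) and res.fun <= v[players] + 1e-9

print(is_balanced(players, v))   # False: x(S) >= 1 for all pairs forces x(I) >= 3/2 > v(I)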
654
655 Next, a game is totally balanced if all restrictions v |S to coalitions S ∈ P are balanced.
656 The totally balanced games are formally equivalent with market games (DEBREU,
657 VIND, AUMANN, SHAPLEY-SHUBIK, (see TU-Games, NTU-Games). Thus, they
658 can be constructed as originating from a side payment or TU exchange economy or
659 market. In such a market a coalition is permitted to transfer utility by reallocating its
660 goods freely in order to achieve maximal joint utility. A coalitional function appears
661 which is indeed totally balanced. Formally, if E is defined as in (8), then analogously to
662 (9) the coalitional function v = v E is given by
663
664 v ( S ) = max { Σ_{i∈S} u i ( x i ) : x i ∈ X (i ∈ S ), Σ_{i∈S} x i = Σ_{i∈S} a i }   ( S ∈ P) (14)
665
666 Furthermore, totally balanced games appear as LP-games (OWEN). If a (positive)
667 linear programming setup L = ( A, b, c) is specified by an input-output matrix A , a vector
668 valued measure of resources b and an objective function given via a vector c , the
669 resulting LP-game is represented by a function v = v^{(A,b,c)} via
670
671 v ( S ) = v^{(A,b,c)} ( S ) = max { c·x : x ∈ ℝ^m_+ , Ax ≤ b( S ) },   ( S ∈ P) (15)
672
673 That is, a coalition may pool its resources and use the production process represented by
674 the matrix A optimally in order to obtain the worth of joint production
675
676 A further representation of totally balanced games can be obtained by seeing them as
677 MIN-games. These are the minima of finitely many (σ−) additive functions,
678 say λ1 ,..., λ r via
679 v ( S ) = ¹{λ1, ..., λ r }( S ) ( S ∈ P ) (16)
680
681 This way, totally balanced games appear to be glove games (SHAPLEY): a minimum of
682 each resource (left hand gloves / right hand gloves) determines the amount of utility

683 (pairs of gloves) a coalition can achieve. Remarkably, all these classes are technically
684 equivalent. There are actually more representations generating totally balanced games,
685 e.g., as games on graphs (max flow – min cut setups, KALAI-ZEMEL).
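A concrete instance of (16) is Shapley's glove game; the sketch below computes its coalitional function as the minimum of two additive measures, with an assumed assignment of gloves to three players.

from itertools import combinations

# Sketch of a MIN-game (16) realized as a glove game: worth = complete pairs of gloves.
# The assignment of gloves to the players is an illustrative assumption.
left_holders, right_holders = {1, 2}, {3}     # players 1, 2 hold left gloves, player 3 a right glove

def v(S):
    S = set(S)
    return min(len(S & left_holders), len(S & right_holders))   # min of two additive measures

for size in (1, 2, 3):
    for S in combinations((1, 2, 3), size):
        print(S, v(S))
# Only coalitions containing player 3 together with player 1 or 2 achieve a positive worth.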
686
687 A nice (proper) subclass of totally balanced games is provided by convex games
688 (SHAPLEY). Within this setup one discusses set functions with increasing marginal
689 worth: the marginal contribution of a player i to a coalition S , say v ( S ∪ {i}) − v ( S ) ,
690 increases with increasing coalitions. In such games the incentive to form large (the
691 grand) coalitions is particularly compelling. In the cost sharing context (if we reverse
692 the sign or consider the difference of an additive and a convex set function) we obtain
693 concave cost structures which nicely concur with decreasing returns to scale.
694
695 A further subclass of cooperative games deserves a separate discussion because it is
696 important in the political rather than in the economic context. The class of simple games
697 consists of functions v : P → {0,1} . The interpretation is not so much that coalitions may
698 win a unit or not. Rather the idea is that a coalition S with v ( S ) = 1 is winning and all
699 others are losing coalitions. Simple games are used to describe group decisions in
700 political bodies, parliament, committees. Special simple games are directed games
701 characterized by a canonical procedure to impose an ordering on the bargaining power
702 of the players. A player is stronger than another player if his marginal contribution to
703 every coalition exceeds the one of his opponent.
704
705 Simple games that admit of a representation by voting strength, i.e, voting games, are of
706 particular interest. Such games are given by an additive set function m ≥ 0 representing
707 the distribution of votes over the players (parties in a parliament) and a majority level α .
708 A coalition is winning if its combined voting power exceeds α . The coalitional function
709 v = v α is thus given by
710
711 v ( S ) = 1 if m ( S ) ≥ α , and v ( S ) = 0 if m ( S ) < α   ( S ∈ P) (17)
712
713 It is easily seen that voting games are directed: a representation naturally induces an
714 ordering of the voting power (consistently, as players may be equivalent).
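The sketch below builds a voting game according to (17) from an assumed distribution of weights and majority level, lists its winning coalitions, and exhibits a dummy player; all numbers are purely illustrative.

from itertools import combinations

# Sketch of a voting game (17): a coalition wins when its combined weight reaches alpha.
# The weights and the majority level are illustrative assumptions.
weights = {1: 2, 2: 2, 3: 2, 4: 1}
alpha = 4

def v(S):
    return 1 if sum(weights[i] for i in S) >= alpha else 0

winning = [S for size in range(1, 5) for S in combinations(weights, size) if v(S)]
print(winning)
# Player 4 never turns a losing coalition into a winning one here, so under this
# representation player 4 is a dummy; players 1, 2 and 3 are equivalent.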
715
716 The representation of a voting game is by no means unique: the same coalitional
717 function obviously may result from a host of pairs (m , α ) . A traditional subclass which
718 admits essentially of a unique representation is given by the homogeneous games
719 (VON NEUMANN-MORGENSTERN). Call a representation (m , α ) homogeneous if
720 every minimal winning coalition has the same voting power. Then the set of players
721 disintegrates into three characters: dummies, sums, and steps. A dummy has no
722 marginal contribution to offer to any coalition. A sum is a player who, in a minimal
723 winning coalition, can be replaced by smaller players (hence, his weight is the sum of
724 the weights of smaller players). Everyone else is a step. Now the unique (“minimal”)
725 representation (OSTMANN) is essentially obtained by assigning 0 to the dummies and
726 1 to the smallest non dummy (who is a step). Then, recursively, sums are awarded a

727 canonical sum of the voting power of smaller players. With steps, this canonical sum is
728 exceeded by 1.
729
730 Within the political context, the computation of a coalitional function resulting from the
731 ballot taken from a (huge) population is called assignment or apportionment.
732 Essentially, the votes assigned to the parties by an election define a coalitional function;
733 the procedure by which the distribution of votes in the parliament is defined (based on
734 the election results) assigns another coalitional function to the parties. In the political
735 context the procedures named d’Hondt, Hare, Imperiali, Danish, and others perform
736 such a task. Frequently, the coalitional function is not preserved, a problem sometimes
737 resulting in “paradoxical” assignment of votes (The “Alabama paradox”). If the
738 population game happens to be homogeneous, then the computation of the minimal
739 representation can also be seen as apportionment, and of course this always preserves
740 the coalitional function by definition.
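As an illustration of such an apportionment procedure, the following sketch implements the d'Hondt rule: seats are awarded one at a time to the party with the currently largest quotient votes/(seats already won + 1). The vote totals and the house size are invented for the example.

# Sketch of the d'Hondt apportionment rule mentioned above.
# The vote totals and the number of seats are illustrative assumptions.
votes = {"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}
house_size = 10

def dhondt(votes, house_size):
    seats = {party: 0 for party in votes}
    for _ in range(house_size):
        # the next seat goes to the party with the largest quotient votes / (seats + 1)
        winner = max(votes, key=lambda p: votes[p] / (seats[p] + 1))
        seats[winner] += 1
    return seats

print(dhondt(votes, house_size))   # {'A': 4, 'B': 4, 'C': 2, 'D': 0}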
741
742 4.2 Solutions
743
744 Having described various classes of games and a possible environment in which to
745 apply such classes, we now turn to solution concepts (see TU-Games, NTU-Games).
746 The idea to “solve” a game in the context of Cooperative TU-Game Theory is
747 tantamount to assigning additive set functions to games. An additive set function yields an
748 award, utility or monetary value to each player. Thus, if W is a class of games and
749 A denotes additive functions, then
750
751 ϕ : W → A or φ : W → P ( A ) (18)
752
753 denotes a solution concept. This is not at variance with the definition provided by (11),
754 as (for a finite set of players) additive set functions are essentially vectors of ℝ^I .
755
756 Again, point valued and set valued versions are being considered. The most universal
757 set valued solution concept is the core. Given a coalitional function υ , the core (see
758 TU-Games, NTU-Games, The Equivalence Principle) of v is the set C (v ) of all additive
759 functions dominating v with the same total mass. Intuitively: all distributions of utility
760 of the grand coalition (i.e. Pareto efficient distributions) which cannot be objected
761 against by smaller coalitions on the grounds that they could do better by cooperating
762 within their own ranks. Formally we have
763
764 C (v ) := {a ∈ A| a ( S ) ≥ v ( S ) ( S ∈ P ), a ( I ) = v ( I )} (19)
765
766 The core is nonempty if and only if the game is balanced; hence a further
767 characterization of balanced games occurs. As we have seen, a balanced game implies a
768 certain pressure towards organization in the grand coalition and the existence of a
769 nonempty core tells us that “organization” may be arranged so as to award players in a fashion
770 not to be contested by smaller coalitions: the core is a concept of stability. Totally
771 balanced games thus admit of a nonempty core for the subgame of every coalition. The
772 various manifestations of totally balanced games may be reflected by appropriate
773 properties of the core: in a TU market game the payoff obtained in a Walrasian

774 Equilibrium (of the exchange economy used for representation) is a core element. In an
775 LP-game, the shadow prices of the grand coalition applied to the vector valued measure
776 of initial assignments yields a core element (a close relative of the Walrasian
777 equilibrium). In a MIN-game the convex combination of those measures featuring
778 minimal total mass (of the grand coalition) yield a core element (again, this version is
779 related to the Walrasian equilibrium).
780
781 Convex games have a particularly nice core: the extreme points (geometrically
782 speaking) are obtained by “bandwagon processes”: coalitions form successively
783 by one player joining and receiving his marginal contribution until the grand coalition is
784 achieved (SHAPLEY).
785
786 Mass phenomena are studied in the framework of large games. By this slogan we
787 mean that either limiting theorems are stated with respect to increasing (market-, LP-)
788 games obtained by repeating the characteristics of the players. Or else, one considers
789 games on a continuum of players ( I is a measurable space, P is a σ -algebra of
790 measurable sets, and v is a function defined on P (usually non atomic)). Equivalence
791 theorems state the coincidence of solution concepts for “large games”. For example, to
792 say that the core is “equivalent” to the Walrasian equilibrium for large games, means
793 that the core shrinks toward the equilibrium payoff (the shadow price distribution) when
794 the market (and the resulting game) is properly replicated. Or, in the non-atomic
795 context, the statement means that the equilibrium payoffs (the dual solutions) are the
796 only ones surviving in the core.
797
798 By contrast, the core is usually not the appropriate concept for the discussion of simple games.
799 Here, the core is empty unless there are veto players, and if those are present, the core is
800 not very decisive.
801
802 Historically the first set valued solution concept is the vNM - Stable Set (von
803 Neumann-Morgenstern solution) (see TU-Games). This is a (not necessarily unique) set
804 of imputations (i.e., measures x with x ( I ) = v ( I ), x ({i}) ≥ v ({i}), i ∈ I ) with the
805 properties of external stability and internal stability. The construction is much more
806 sophisticated; the class of games admitting of such solutions is as yet not specified.
807 However, traditionally zero-sum homogeneous games (even with multiple levels) admit
808 of such solutions (VON NEUMANN-MORGENSTERN). For convex games, the core
809 is the only vNM-Stable Set (SHAPLEY) and, more recently, this has also been
810 established for large (non atomic) exact MIN-games (i.e., all measures in the
811 representation having the same total mass). (EINY, HOLZMAN, MONDERER,
812 SHITOVITS). For non-exact MIN-games (in particular large ones, the non atomic
813 context) vNM-solutions can be established predicting cartelization in such games
814 (convex combinations of normalized measures absolutely continuous to the
815 representations), provided the representation is “orthogonal” (ROSENMÜLLER-
816 SHITOVITZ).
817
818 The first point-valued concept to be mentioned is the Shapley value (SHAPLEY, see the
819 section on TU-Games). It cannot always be seen as a "solution" but rather represents an


820 “expected payoff”, “average marginal contribution”, or “power index” for players
821 facing a game. There is a closed formula which, for any given v , assigns
822
823 Φ_i(v) = Σ_{{S | i∈S}} ((s−1)!(n−s)!/n!) (v(S) − v(S − {i}))        (20)

824
825 to player i ∈ I (s := |S|, n := |I|). On the other hand, the Shapley value admits of a
826 unique set of axioms defining it (Pareto efficiency, additivity, covariance with renaming
827 the players by a permutation, and zero assignment for dummy players). It can also be
828 seen as the unique extension of the uniform distribution defined as a solution for
829 “unanimous” simple games. The Shapley value also obeys an “equivalence principle”:
830 for large market games it converges towards the core.
831
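As an illustration of formula (20), the following sketch (not part of the original text, with an invented 3-player game v) evaluates the closed formula directly and checks the Pareto efficiency property, i.e. that the values add up to v(I).

```python
# Illustrative sketch only; the game v is invented for demonstration.
from itertools import combinations
from math import factorial

players = (1, 2, 3)
v = {(): 0, (1,): 0, (2,): 0, (3,): 0,
     (1, 2): 30, (1, 3): 30, (2, 3): 10, (1, 2, 3): 60}

def val(S):
    return v[tuple(sorted(S))]

def shapley(i):
    """Formula (20): weighted sum of marginal contributions over coalitions containing i."""
    n = len(players)
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(players, r):
            if i not in S:
                continue
            s = len(S)
            weight = factorial(s - 1) * factorial(n - s) / factorial(n)
            total += weight * (val(S) - val(set(S) - {i}))
    return total

phi = {i: shapley(i) for i in players}
print(phi)
assert abs(sum(phi.values()) - val(players)) < 1e-9   # Pareto efficiency
```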
832 There is an extension of the Shapley value to large or, more precisely, non-
833 atomic games, the Aumann-Shapley value. Basically, the player set is a measure space
834 as above and one considers set functions of bounded variation as “games”. On certain
835 subspaces of such games there exists a mapping assigning to each game a (σ-)
836 additive measure and obeying certain axioms which generalize the finite version. A
837 value does not exist on all bounded variation games but typically on the subspace
838 generated (with respect to the bounded variation norm) by polynomials of non atomic
839 measures, on absolutely continuous games, etc. (AUMANN-SHAPLEY). The theory of
840 the Shapley value is now a field of its own, extending classical functional analysis by
841 considering measure-valued functionals with certain invariance properties. A value
842 theory exists for countably many players as well; however, it is less developed, as certain
843 pathologies prevent the generalization.
844
845 A further family of solutions is provided by nucleolus type concepts (SCHMEIDLER,
846 see the section on TU-Games). In order to obtain the nucleolus (very generally
847 speaking), one lists the excesses
848
849 e(S, x, v) = v(S) − x(S)
850
851 (“reasons to complain”) for any imputation x in a (weakly) decreasing order, say
852
853 θ(x) := (…, e(S, x, v), …)        (21)
854 Then the nucleolus ν is the unique imputation such that θ (•) is lexicographically
855 minimal, i.e.
856
857 θ(ν) ≼_lexic θ(x) for all imputations x        (22)
858
859 Thus, the largest complaints against a proposed solution are minimized, then the second
860 largest, etc.
861
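A small illustrative sketch (not from the original text): it lists the excesses of (21) in weakly decreasing order for two invented imputations of a 3-player game and compares the resulting vectors lexicographically as in (22); the imputation with the lexicographically smaller vector is the better candidate for the nucleolus.

```python
# Illustrative sketch only; the game v and the two imputations are invented.
from itertools import combinations

players = (1, 2, 3)
v = {(1,): 0, (2,): 0, (3,): 0, (1, 2): 20, (1, 3): 30, (2, 3): 40, (1, 2, 3): 42}

def theta(x):
    """Excesses e(S, x, v) = v(S) - x(S) over the proper coalitions, sorted decreasingly."""
    excesses = []
    for r in range(1, len(players)):
        for S in combinations(players, r):
            excesses.append(v[S] - sum(x[i] for i in S))
    return sorted(excesses, reverse=True)

x1 = {1: 14, 2: 14, 3: 14}          # two imputations of v(I) = 42
x2 = {1: 4, 2: 14, 3: 24}
print(theta(x1))
print(theta(x2))
# Python compares lists lexicographically, which is exactly the order used in (22):
print("x2 is lexicographically preferred to x1:", theta(x2) < theta(x1))
```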
862 The modified nucleolus or modiclus ψ lists bi-excesses
863
864 e(S, x, v) − e(T, x, v)


865
866 and proceeds accordingly (SUDHÖLTER). The modified version involves the dual
867 game which reflects the preventive power derived from the original game.
868
869 We are not in a position to describe the abundance of available solution concepts. The
870 Kernel and Bargaining Set (with a great many variations) are defined in the spirit of the
871 nucleolus but emphasize different versions of bargaining. Objections and counter-
872 objections are raised by the players, and stability is achieved when arguments are in
873 balance (DAVIS-MASCHLER, PELEG). Axiomatizing many solution concepts (like
874 the core, the nucleolus and many others) has recently been successful on a large scale by
875 applying the construction of reduced games (PELEG, SOBOLEV).
876
877 The discussion of solution concepts for NTU-games is based on the results in TU-
878 territory. A general approach is to strive for a generalization of TU concepts. The core
879 of an NTU game accordingly can be defined as the set of Pareto efficient payoffs of the
880 grand coalition upon which no smaller coalition can improve. As payoffs accrue to
881 individual players (not to the coalition) in the NTU context, the term "improve" has to be
882 clarified (all players are strictly better off or one player is strictly better off and all the
883 others do not lose).
884
885 For the Shapley value, the way to extend the concept to the NTU framework is not unique
886 and involves fixed point procedures (SHAPLEY, AUMANN, HARSANYI, KALAI-
887 SAMET, HART-MAS-COLELL). Nucleoli can be defined after a discussion of the
888 version of excess one wants to apply. The Bargaining Set and Kernel allow for various
889 modifications (GRANOT-MASCHLER, ZHOU). Thus, with NTU games in general,
890 solution concepts are less canonically defined; they are extensions of the TU-concepts
891 and differ in the various approaches.
892
893 5. The Equivalence Principle
894
895 5.1 Walrasian Equilibrium
896
897 The concept of a competitive or Walrasian equilibrium (see The Equivalence Principle)
898 is deeply rooted in the history of game theory and its influence on the development of
899 game theory cannot be overstated.
900
901 Moreover there are several interesting structural relations between the Walrasian
902 equilibrium and different solution and equilibrium notions of game theory. Nevertheless
903 the Walrasian equilibrium is not a game theoretical concept, it rather describes
904 equilibrium states of economic systems, briefly, economies. There are various models
905 with different levels of complexity to describe an economy. An economy is defined as a
906 list of data describing the set of agents in the society, the set of possible consumption
907 bundles in a vector space of commodities and the agents’ economic characteristics like
908 initial endowments of commodities, sets of technologically feasible production plans,
909 shares of agents in production possibilities and agents’ preferences over commodities
910 or, more generally, over allocations of commodities.
911


912 The case of an exchange economy E as defined before is particularly prominent because
913 of its role in the development of economic theory and due to the fact that it represents
914 many important features of more general economic systems. Preferences over
915 commodity bundles are usually described by transitive, complete continuous binary
916 relations. These are often assumed to have in addition some monotonicity or convexity
917 property. Though such preferences are representable by utility functions u it is the
918 purely ordinal aspect which underlies most of the economic analysis. Different utility
919 representations, generated from u by order-preserving mappings from ℝ to ℝ, do not
920 influence most parts of standard economic analysis.
921
922 As in Game Theory, also in Economic Theory one is interested in the outcomes of the
923 interaction of agents on markets. A fundamental difference lies in the fact that in
924 contrast to Game Theory neoclassical competitive Equilibrium Theory assumes agents
925 to be price takers and thereby explicitly precludes strategic interaction. Rather than
926 regarding other agents’ decision processes price takers just take prices as given and
927 maximize their preferences on the budget sets which they determine by evaluating their
928 initial endowments with the given price system.
929
930 To make the idea of a competitive market equilibrium precise consider a finite set I of
931 agents with initial endowments a^i ∈ X = ℝ^l_+ and preferences represented by utility
932 functions u^i : X → ℝ for all i ∈ I. Abbreviate (u^i)_{i∈I} and (a^i)_{i∈I} by u and
933 a, respectively. A mapping x : I → X, i ↦ x^i, is called an allocation for E. The
934 allocation x is called attainable for the coalition S ⊂ I if Σ_{i∈S} x^i = Σ_{i∈S} a^i; it is
935 attainable if it is attainable for I. A large economy with many agents is defined in an
936 analogous way as an Aumann economy, i.e. a mapping
937 E : (I, B(I), λ) → ℝ^l_+ × U : i ↦ (a^i, u^i) ≡ (a(i), u(i)), where U is a space of continuous

938 utility functions endowed with some suitable topology, I is the closed unit interval with
939 its Borel- σ -algebra of admissible coalitions and the Lebesgue (probability)measure λ .
940 An attainable allocation is in this context an integrable map x : I → ℝ^l_+ such that
941 ∫_I x(i) dλ = ∫_I a(i) dλ. Rather than counting agents, now sets of agents are measured.
942 And instead of adding up their consumptions to total market consumption now
943 integration determines mean or average market consumption.
944
945 A Walrasian equilibrium for an Aumann economy E is a pair (p∗, x∗), consisting of a
946 linear price functional p∗ on ℝ^l and an attainable allocation x∗ for E such that for λ-
947 almost all i ∈ I (which is the technical version of "for all i") x∗(i) maximizes u^i on the
948 budget set B^i(a^i, p∗) := {x^i ∈ ℝ^l_+ | p∗·x^i ≤ p∗·a^i}. The definition of Walrasian equilibria
949 for a finite exchange economy is analogous.
950
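As a numerical illustration of the finite case (not part of the original text), the following sketch computes a Walrasian equilibrium of an invented two-agent, two-good exchange economy with Cobb-Douglas utilities by searching for a price of good 1 (good 2 serving as numeraire) at which the market for good 1 clears.

```python
# Illustrative sketch only; all parameters are invented.
import numpy as np

alpha = {1: 0.5, 2: 0.25}                      # Cobb-Douglas preference parameters
endow = {1: np.array([1.0, 0.0]),              # initial endowments a^i
         2: np.array([0.0, 1.0])}

def demand(i, p):
    """Cobb-Douglas demand: agent i spends share alpha_i of wealth on good 1."""
    wealth = p @ endow[i]
    return np.array([alpha[i] * wealth / p[0], (1 - alpha[i]) * wealth / p[1]])

def excess_demand_good1(p1):
    p = np.array([p1, 1.0])                    # good 2 is the numeraire
    return sum(demand(i, p)[0] for i in endow) - sum(endow[i][0] for i in endow)

# Bisection on the price of good 1 until the market for good 1 clears
lo, hi = 1e-6, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess_demand_good1(mid) > 0 else (lo, mid)

p_star = np.array([0.5 * (lo + hi), 1.0])
x_star = {i: demand(i, p_star) for i in endow}
print("equilibrium prices:", p_star)
print("equilibrium allocation:", x_star)       # by Walras' law the second market clears too
```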
951 The first existence theorem for a finite economy due to ARROW and DEBREU made
952 use of the construction of an abstract economy or generalized game and of the existence
953 proof of a social equilibrium for such a generalized game due to DEBREU. A


954 generalized game is an extension of a normal form game in which a player
955 cannot freely choose his strategy from his strategy space but is restricted to
956 some proper subset of it which varies with all the other players' strategy
957 choices. There may exist, therefore, strategy profiles which cannot actually be played
958 because some coordinates violate restrictions implied by other coordinates. Playability
959 already requires some consistency of choices. Among the consistent strategy profiles
960 social equilibria are defined exactly as Nash equilibria. A Walrasian equilibrium can be
961 seen as such a social equilibrium in a generalized (non-cooperative) game. It was shown
962 later by SCHMEIDLER that Walrasian equilibria of an economy may be even
963 represented as Nash equilibria of some suitably designed normal form game. This result
964 constitutes a non-cooperative example of the Equivalence Principle, which requires that
965 two different solution concepts always single out the same set of allocations on a whole
966 class of economies.
967
968 5.2 Walrasian Equilibria and Cooperative Solutions
969
970 The first instance of an equivalence theorem was Aumann’s Core Equivalence
971 Theorem stating the coincidence of Walrasian allocations and of core allocations for
972 Aumann economies. To state this result formally one needs a definition of the core for
973 an exchange economy. One way to proceed consists in extending to NTU-games the
974 solution concept of the core which has been defined in Section 3.2 for TU-games. Then
975 core allocations for the economy are defined as those allocations that induce utility
976 profiles in the core of the NTU-game V_E induced by the economy E. Although order-
977 preserving transformations of utilities induce transformed NTU-games, also these
978 games' core utility payoffs are exactly those induced by the core of the economy. An
979 alternative approach followed in large parts of the literature in economic theory is to
980 argue directly in the economy rather than its induced utility space. A very strong
981 equivalence theorem follows immediately.
982
983 Consider an Aumann economy E. Let x be an attainable allocation for E. An objection
984 of a coalition S ∈ B(I) to the allocation x is an allocation y that is attainable for S and
985 satisfies u^i(y^i) ≥ u^i(x^i) for λ-almost every i ∈ S and λ({i ∈ S | u^i(y^i) > u^i(x^i)}) > 0.
986 A counter objection of a coalition T ∈ B(I) to the allocation y is an allocation z that is
987 attainable for T and satisfies u^i(z^i) > u^i(y^i) for λ-almost every i ∈ T ∩ S and
u^i(z^i) ≥ u^i(x^i) for λ-almost every i ∈ T \ S and λ({i ∈ T \ S | u^i(z^i) > u^i(x^i)}) > 0.
988
989 An objection to which no coalition has a counter objection is called justified.
990
991 The Core C (E ) of the economy E (see TU-Games, NTU-Games, The Equivalence
992 Principle) is the set of attainable allocations to which there is no objection. The Mas-
993 Colell Bargaining Set MCB (E ) is the set of attainable allocations to which there is no
994 justified objection (see TU-Games). Moreover, denote by W (E ) the set of Walrasian
995 equilibrium allocations. Obviously by definition one has C (E ) ⊆ MCB (E ) . Also, for
996 finite exchange economies as well as for Aumann economies W (E ) ⊆ C (E ) holds true.
997 Hence W (E ) ⊆ C (E ) ⊆ MCB (E ) . Now Mas-Colell’s equivalence theorem states for


998 Aumann economies with monotonic preferences that W (E ) = MCB (E ) . An immediate


999 consequence is Aumann’s Equivalence Theorem, which asserts W (E ) = C (E ) .
1000
1001 While the set-valued concept of the core reflects some stability of the resulting
1002 outcomes, the value represents more the intuition of a compromise or average. Despite
1003 their difference, formally and regarding their intentions, both concepts satisfy the
1004 Equivalence Principle. Value allocations may be distinguished according to whether
1005 exchange economies with (ordinal) preferences or with (cardinal) concave utility
1006 functions are considered and whether an induced TU-game or NTU-game is analyzed.
1007 A first weak equivalence result for the Shapley TU-value and the utilities derived from
1008 Walrasian competitive allocations in TU-markets was given by AUMANN and
1009 SHAPLEY. After AUMANN distinguished between ordinal and cardinal value
1010 allocations he derived an equivalence theorem for Aumann economies between ordinal
1011 value allocations, cardinal value allocations and Walrasian allocations. Being based on
1012 differentiability assumptions this equivalence does not hold in the same generality as
1013 core or bargaining set equivalence. Without differentiability value allocations in
1014 Aumann economies are Walrasian but not vice versa (HART).
1015
1016 It is possible to axiomatize the set-valued map from the class of Aumann economies
1017 with uniformly smooth preferences which exactly singles out the Walrasian allocations
1018 (DUBEY and NEYMAN). Clearly the core, the value and the Mas-Colell Bargaining set
1019 satisfy these axioms. The value equivalence may be destroyed, however, when the
1020 Shapley value is replaced by the Harsanyi value (HART and MAS-COLELL).
1021
1022 5.3 Approximate and Weak Equivalence
1023
1024 The Equivalence Principle represented by the unique characterization of Walrasian
1025 allocations in atomless economies by various important solution concepts from game
1026 theory contains a remarkable insight. The strategic and cooperative options of agents
1027 represented by the different solution concepts totally lose their impact on the allocation
1028 process under the dominating power of perfect competition of many. Clearly, an
1029 atomless economy as well as atomless games are only abstractions and the immediate
1030 question arises as to what extent this equivalence remains approximately true when one
1031 goes from a continuum model to a large finite model. Historically the analysis of large
1032 finite economies preceded even that of the continuum case. Already in 1881
1033 EDGEWORTH had argued that if a finite exchange economy is replaced by its n-fold
1034 replication, i.e. where identical copies of each agent appear, then the set of allocations not blocked
1035 by any coalition, now called the core, shrinks to the set of Walrasian allocations if n tends to infinity. This
1036 insight had been put into a rigorous theorem by DEBREU and SCARF in 1963. So
1037 Aumann’s Core Equivalence Theorem came as an elegant continuum version
1038 transmitting the same economic insight. The question remained whether also for large
1039 economies more general than the quite artificial replica economies an approximate
1040 equivalence would hold true. The first very general core limit theorem due to BEWLEY
1041 was followed by several others in 1970. The final answer is a result due to ANDERSON
1042 and to DIERKER which confirms the approximate equivalence and even allows one to
1043 estimate the competitive gap between core and Walrasian allocations for any given
1044 finite economy.
1045


1046 The Core Equivalence Theorem for general Aumann economies correctly reflects the
1047 fact that the gap between core allocations and Walrasian allocations converges to zero if
1048 the number of agents tends to infinity. It does not guarantee, however, that the core or
1049 the Walrasian allocations in such a sequence of increasing economies converge to the
1050 core or Walrasian allocations of the limiting Aumann economy. C(E) and W(E) may be
1051 much larger sets, properly containing the sets C(E_n) and W(E_n) for the n-th economy in
1052 the sequence. Only for regular economies with smooth preferences, which have only
1053 finitely many locally unique Walrasian allocations, does even the convergence of the core
1054 and of the Walrasian allocations hold true. But neither this strong nor the above weaker
1055 convergence property of the core turns out to be true for the Mas-Colell bargaining set.
1056 Even in perfectly nice replica economies an analogue to the core shrinking is not true.
1057 More specifically the set of all individually rational Pareto optimal equal treatment
1058 allocations not in the Mas-Colell bargaining set converges to the empty set in the
1059 Hausdorff distance (ANDERSON, TROCKEL AND ZHOU). This result considerably
1060 weakens the explanatory power of Mas-Colell’s Equivalence Theorem.
1061
1062 The Core Equivalence Theorem has counterparts for several different economic
1063 scenarios. It holds true even for an infinite dimensional function space of commodities
1064 as was first shown by BEWLEY. If also the number of agents is infinite, a situation
1065 called “large square” by OSTROY, then depending on the sizes of the commodity space
1066 and of the space of agents core equivalence can be proved if markets for all
1067 commodities are thick, i.e. on each commodity market enough agents are active to
1068 suppress via perfect competition effective strategic interaction.
1069
1070 It is even possible to get (weak) core equivalence theorems for finite economies. This
1071 requires a special structure which allows us to represent them by specific TU Market
1072 games. Techniques and arguments akin to those leading to core equivalence in finite
1073 linear production games (BILLERA and RAANAN, OWEN) then allow one to prove finite
1074 convergence of the core of the game to its equilibrium utility payoff profile
1075 (ROSENMÜLLER, see the section on The Equivalence Principle).
1076
1077 5.4 The Nash Program
1078
1079 A weak approximate equivalence result between Walrasian equilibria and Nash
1080 equilibria of derived non-cooperative games in normal form different from Schmeidler’s
1081 game is based on earlier work on non-cooperative exchange due to SHAPLEY and
1082 SHUBIK. In these games quantities of commodities form the strategy sets and the Nash
1083 equilibria are called COURNOT equilibria. For a sequence of increasing economies
1084 converging to an Aumann-like continuum economy it can be stated that in many
1085 situations Cournot equilibria of the finite economies converge to a Walrasian
1086 equilibrium of the limiting continuum economy. The converse, namely that Walrasian
1087 equilibria of the continuum economy are limits of Cournot equilibria holds true
1088 whenever the Walrasian equilibria are regular.
1089
1090 One may interpret the Equivalence Principle in a broader sense. Then it relates any two
1091 solution concepts on certain classes of economies or of games to each other. An
1092 example for this is provided by what is called today the Nash Program. This term refers
1093 to Nash’s contributions in which he tried to support his cooperative bargaining solution


1094 as a strategic equilibrium of some normal form game. It reflects the intention to give
1095 axiomatic solution concepts of cooperative game theory some solid basis by deriving them
1096 from suitably modeled non-cooperative strategic interaction. Several cooperative
1097 solution concepts can be supported in a non-cooperative way either by normal form
1098 games or by games in extensive form. The most popular example is the alternating
1099 bargaining game due to RUBINSTEIN, where some discount rate devaluates the payoff
1100 during the course of the extensive form game. If this discount rate is nearly negligible
1101 then the unique subgame perfect equilibrium of this game approximates closely the
1102 Nash solution (BINMORE, RUBINSTEIN AND WOLINSKY). A direct support of the
1103 Nash solution as the unique strict Nash equilibrium can be established via the
1104 interpretation of an n -person bargaining game as a special economy whose unique
1105 Walrasian allocation coincides with the Nash solution (TROCKEL). Most of the results
1106 guaranteeing the support of a cooperative solution concept by equilibria of non-
1107 cooperative games can even be interpreted as mechanism theoretical implementations.
1108 To achieve this goal an outcome space has to be available and the cooperative solution
1109 concept must be representable by some social choice rule. These concepts play a central
1110 role in the next section.
1111
1112 6. Mechanism Theory
1113
1114 6.1. Historical Background
1115
1116 Although mechanism theory is now established as a part of game theory, its roots lie
1117 in the theory of markets.
1118
1119 The idea of a mechanism originated from discussions in the 1930s concerning the
1120 economic feasibility of socialism. Market versus planning mechanisms were advocated
1121 in a by now famous debate between the economists FRIEDRICH VON HAYEK,
1122 OSKAR LANGE, ABBA LERNER and LUDWIG VON MISES. Informational aspects of
1123 decentralization built the starting point for a search for suitable mechanisms to organize
1124 an economy. But a class of objects called “mechanisms” which could be compared as to
1125 their effectiveness in organizing markets in a decentralized way was lacking. And
1126 despite KOOPMANS’ use of the term “allocation game” for the description of the
1127 tatonnement in competitive markets or centrally planned economies neither games nor
1128 mechanisms as technical terms in their present understanding were used to formalize the
1129 problem. The first formal version of a mechanism was that of an adjustment process. At
1130 each point of (discrete) time each of the n agents in the economy is supposed to send a
1131 message from a given message set. This message is publicly observable as is the state of
1132 the economic environment e = (e1 ,..., en ) that represents the exogenously given
1133 economic data like preferences, endowments or production possibilities. A set of
1134 response functions f^i, i = 1, ..., n, determine the agents' messages at the next period
1135 of time via m^i(t+1) = f^i(m(t), e), i = 1, ..., n, where m(t) = (m^1(t), ..., m^n(t)) denotes
1136 the agents’ messages at time t .
1137
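A schematic sketch (not from the original text) of such an adjustment process: the response rule f^i below is invented purely to show the iteration m^i(t+1) = f^i(m(t), e); any fixed point of the iteration is a stationary message profile.

```python
# Illustrative sketch only; the environment and response rule are invented.
environment = (0.6, 0.3, 0.9)              # hypothetical characteristics e = (e1, e2, e3)

def f(i, m, e):
    """Invented response rule: move halfway towards the average message plus own datum."""
    avg = sum(m) / len(m)
    return 0.5 * m[i] + 0.5 * (avg + e[i]) / 2

m = [0.0, 0.0, 0.0]                        # initial message profile m(0)
for t in range(200):
    m = [f(i, m, environment) for i in range(len(m))]   # m(t+1) = f(m(t), e)

print("(approximately) stationary message profile m* =", [round(x, 4) for x in m])
```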
1138 The first version of a mechanism is a simple dynamic one which still lacks any strategic
1139 features. Subsequently a static version was considered where the f_t^i were the same for


1140 all t and a stationary message profile m∗ satisfying
1141 m∗(e) = (f^1(m∗(e), e^1), ..., f^n(m∗(e), e^n)) =: f(m∗(e), e) represented a privacy-preserving
1142 equilibrium, where for each agent knowledge of his own characteristics and the message
1143 profile provides sufficient information for the choice of his message. Still this
1144 verification procedure had no strategic ingredients.
1145
1146 Yet, the availability of a concept of a mechanism led to several interesting questions
1147 starting from the basic theorems of welfare economics. These theorems state that
1148 perfectly competitive prices induce a Pareto efficient allocation and that any Pareto
1149 efficient allocation may be induced by suitable competitive prices in neoclassical
1150 economic environments. Now one could ask whether there are other decentralized
1151 mechanisms, more effective in terms of information processing than the Walrasian
1152 tatonnement; or others defined on larger economic environments than the classical one,
1153 that would lead to Pareto efficiency. Also other desirable properties like for instance
1154 fairness or envy-freeness could replace Pareto efficiency as a social goal to be
1155 implemented by suitable mechanisms. Also an extension of interest from the economic
1156 system to general social systems could be considered. A next step towards our present
1157 day notion of mechanism was, accordingly, the idea of realizing a given social goal
1158 represented by some correspondence F through an adjustment process and some
1159 outcome function h . Then h(m∗ (e)) ∈ F (e) determines the allocation on the basis of the
1160 stationary message profiles.
1161
1162 It is the lack of incentive aspects that distinguishes this realization of socially desired
1163 goals from implementation by a mechanism formalized as a game form. The agents’
1164 behavior is prescribed here by the response function rather than freely chosen on the
1165 basis of preferences and strategic decisions. The inclusion of the important aspect of
1166 strategic choice of messages was caused by SAMUELSON’s skeptical attitude towards
1167 the Lindahl equilibrium, the analogue of the Walrasian competitive equilibrium for
1168 economies with public goods. He pointed to likely misrepresentation of preferences in
1169 the special case that agents’ messages are reports on their own preferences.
1170
1171 In today’s terminology SAMUELSON claimed the incentive incompatibility of the
1172 Lindahl equilibrium. Obviously, an adjustment process is not the adequate concept for
1173 dealing with incentive problems. A natural approach appears to be one via non-
1174 cooperative games: The sets of strategies for the n agents (now players) are sets of
1175 preference relations on allocations including their true preferences. This
1176 specific kind of game, called direct revelation game or just direct game, allows the
1177 interpretation of a player's message as his lying or telling the truth.
1178
1179 It turned out later that not only SAMUELSON was right, but that whenever truth telling
1180 is a Nash equilibrium in a direct game it is even an equilibrium in dominant strategies.
1181 This means that truth telling, if it is consistent with Nash equilibrium, is even
1182 optimal for each player independently of whether the others tell the truth or lie. There are
1183 no truth telling equilibria in direct games with public goods and, moreover, the same
1184 happens in classical private good exchange economies – provided the rules are designed
1185 in such a way that truth telling is individually rational as well as Pareto efficient. Pareto
1186 efficiency, which one has automatically when using dominance equilibrium, has to be


1187 added as a desirable criterion when one is dealing with the Nash equilibrium. The
1188 impossibility results in the framework of neoclassical private goods and public goods
1189 environments allowed for several possible reactions. Would the main picture change for
1190 large economies? Would similar impossibility hold true in a social choice framework
1191 where the continuum of allocations would be replaced as an outcome space by some
1192 finite set?
1193
1194 Would a change of the employed equilibrium concept, i.e. Nash equilibrium rather than
1195 dominance equilibrium lead to an improvement?
1196
1197 For neoclassical private goods economies it turned out that the incentive to misrepresent
1198 preferences tends to zero when the number of agents in the economy becomes very
1199 large. In public goods economies, on the contrary, the incentives to misrepresent
1200 preferences increase with the number of agents (see Mechanism Theory).
1201
1202 6.2 Implementation of Social Choice Rules
1203
1204 In this section the focus is on social choice rules and their implementation by
1205 equilibrium outcomes of suitable normal form games.
1206
1207 Let I = {1,..., n} now be the set of players’ positions, A be a non-empty set, called
1208 outcome space and M i sets of possible messages mi among which a player in position
1209 i may choose, i ∈ I .
1210
1211 The outcome space A represents all possible social states for an n-person society. A
1212 social choice rule associates with any profile u = (u^1, ..., u^n) of utility functions
1213 u^i : A → ℝ a set of states considered as socially desirable for a population represented
1214 by u . To formalize this idea, let U i be non-empty sets of utility functions on A
1215 representing all admissible utility functions that players in position i may have, i ∈ I .
1216
1217 Consider U ⊆ U 1 × ... × U n . A (point valued) mapping f : U → A is called social choice
1218 function. A set valued mapping F : U → P ( A) is called social choice rule.
1219
1220 The planner’s task is to make sure that any population of rational agents represented by
1221 some u ∈ U that obeys the rules designed by him automatically realizes some social
1222 state in F (u ) . To formalize this idea one needs the concept of a game form. For this one
1223 considers a map g : M^1 × ... × M^n → A, m := (m^1, ..., m^n) ↦ g(m). Such a map is called
1224 an outcome function. The list (M^1, ..., M^n, g) is called game form or mechanism.
1225 Because of the bijective association between ( M 1 ,..., M n ) and M := ∏i∈I M i one
1226 denotes a mechanism alternatively also by ( M , g ) . The following observation is
1227 crucial for mechanism theory. For each admissible profile of utility functions, i.e. for
1228 each u ∈ U , the mechanism ( M , g ) induces a game Γ g ,u in normal form defined by


1229 Γ_{g,u} := (M^1, ..., M^n; u^1 ∘ g, ..., u^n ∘ g). For obvious reasons Γ_{g,u} is also
1230 denoted (M, u ∘ g).
1231
1232 Denote by NE(Γ_{g,u}) the set of Nash equilibria of Γ_{g,u}, and by
1233 NO(Γ_{g,u}) := g(NE(Γ_{g,u})) and NP(Γ_{g,u}) := u ∘ g(NE(Γ_{g,u})) the Nash equilibrium
1234 outcomes and payoffs for Γ_{g,u}, respectively. Note that in general the sets NO(Γ_{g,u}) and
1235 NP(Γ_{g,u}) vary with u, as does F(u).
1236
1237 The planner's design problem is to find a g such that equilibrium behavior in any
1238 game Γ g ,u with u ∈ U induces a socially desirable outcome. The notion of Nash-
1239 implementation of a Social Choice Rule makes this idea precise.
1240
1241 A mechanism ( M , g ) Nash-implements a social choice rule F on the domain U if
1242 NO (Γ g ,u ) ⊆ F (u ) for all u ∈ U .
1243
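A toy sketch (not part of the original text) of the definitions above: for an invented two-player mechanism (M, g), a small domain U of utility profiles and a hypothetical social choice rule F, the code computes the Nash equilibrium outcomes NO(Γ_{g,u}) by brute force and checks the inclusion NO(Γ_{g,u}) ⊆ F(u) for every u ∈ U.

```python
# Illustrative sketch only; outcomes, messages, utilities and F are all invented.
from itertools import product

A = ["a", "b", "c"]                              # outcome space
M1 = M2 = ["x", "y"]                             # message sets
g = {("x", "x"): "a", ("x", "y"): "b",           # outcome function of the mechanism
     ("y", "x"): "b", ("y", "y"): "c"}

# Two admissible utility profiles; F picks the outcomes maximizing the utility sum.
U = [({"a": 2, "b": 1, "c": 0}, {"a": 2, "b": 1, "c": 0}),
     ({"a": 0, "b": 1, "c": 2}, {"a": 0, "b": 1, "c": 2})]

def F(u):
    best = max(u[0][o] + u[1][o] for o in A)
    return {o for o in A if u[0][o] + u[1][o] == best}

def nash_outcomes(u):
    """Brute-force NO(Γ_{g,u}): outcomes of all pure Nash equilibria of the induced game."""
    out = set()
    for m1, m2 in product(M1, M2):
        pay1, pay2 = u[0][g[(m1, m2)]], u[1][g[(m1, m2)]]
        if all(u[0][g[(d1, m2)]] <= pay1 for d1 in M1) and \
           all(u[1][g[(m1, d2)]] <= pay2 for d2 in M2):
            out.add(g[(m1, m2)])
    return out

for u in U:
    print("NO =", nash_outcomes(u), " F(u) =", F(u),
          " weakly implemented:", nash_outcomes(u) <= F(u))
```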
1244 There is some disagreement in the literature about whether this (weak) implementation
1245 is the right concept or rather full implementation requiring equality instead of only
1246 inclusion.
1247
1248 Notice, however, that NO(Γ_{g,u}) is not a satisfactory non-cooperative solution concept on
1249 its own. It rather collects all singleton-valued equilibria, each of which is a solution
1250 concept and, if played, excludes simultaneous play of the other equilibria. Therefore,
1251 consistent strategic behavior in the framework of non-cooperative games can only result
1252 in some point in F (u ) and never cover all of F (u ) . However in the case of weak but not
1253 full implementability of a social choice rule one might want to understand why some
1254 points of F (u ) result from equilibrium behavior while others do not. In case of full
1255 implementation this problem does not arise. For social choice functions both notions of
1256 implementation coincide. Several contributions in the literature are concerned with
1257 providing sufficient and necessary conditions for the Nash-implementability of a social
1258 choice rule (see the section on Mechanism Theory).
1259
1260 Replacing the concept of the Nash equilibrium by some of its refinements like
1261 dominance equilibrium, undominated Nash equilibrium, strong or strict Nash
1262 equilibrium leads to the analogous notions of weak and full implementation.
1263
1264 6.3 The Revelation Principle
1265
1266 One of the best known and most applied (groups of) results of mechanism theory
1267 is the Revelation Principle (see Mechanism Theory).
1268
1269 It is represented by formal theorems which, for various notions of
1270 equilibrium and under varying assumptions, assert that implementing mechanisms can be replaced
1271 by direct mechanisms. The general value of this principle lies in the fact that the planner


1272 when looking for a suitable mechanism may restrict his search to the much smaller
1273 family of direct mechanisms. The problem with the Revelation Principle lies in the fact
1274 that in the class of situations which allows its strongest and most satisfying version this
1275 leads to general impossibility results, while in the cases where it would be most helpful only
1276 such versions hold true which have considerable drawbacks. This dichotomy concerns
1277 dominance equilibria versus Nash or Bayesian Nash equilibria. Once the equilibrium
1278 concept is fixed literally the same version of the revelation principle can be stated. But
1279 clearly its truth depends on the employed equilibrium concept. While the dominance
1280 equilibrium by definition is unique, a game may have several Nash equilibria. Bayesian
1281 or Bayesian Nash equilibria are Nash equilibria of normal form representations of so
1282 called Bayesian games (see Mechanism Theory Stochastic and Repeated Games). In
1283 contrast to a standard normal form game where there is common knowledge among the
1284 players about the data of the game including all players’ payoff functions one may
1285 formalize strategic interaction in situations of incomplete information about the
1286 characteristics of the fellow players. This may be expressed by probability measures the
1287 players have on their co-players’ characteristics. Now in this framework not only payoff
1288 functions but also these probability distributions are part of players’ characteristics. The
1289 characteristics of a player are called his type.
1290
1291 HARSANYI’s idea to model strategic scenarios under incomplete information as
1292 Bayesian games with consistent beliefs, i.e. each player's probability distribution on all
1293 players' type spaces arising as a marginal of the same a priori distribution, was consistently
1294 carried out by MERTENS and ZAMIR and later by BRANDENBURGER. A sketchy
1295 description of a simple special case follows. A Bayesian game B is given by a list
1296 ( M 1 ,..., M n , θ1 ,..., θ n , p1 ,..., p n , C1 ,..., C n ) with the following interpretation:
1297
1298 M^i and θ^i are player i's finite action set and finite type space, respectively. There is
1299 some ϑi ∈ θ i determining ( pi , C i ) uniquely, i.e. ( pi , C i ) ≡ ( pi (ϑi ), C i (ϑi )) . Here p i is
1300 a probability measure on ∏ nj =1, j ≠i θ j and C i : M 1 × ... × M n → R is player i ’s payoff
1301 function. A strategy s i ∈ S i is a map from θ1 × ... × θ n into M i , i ∈ I . A Bayesian Nash
1302 equilibrium is a strategy profile, which for every type of every player optimizes his
1303 expected payoff given the other players’ equilibrium strategies. In fact this is a Nash
1304 equilibrium of a suitably defined normal form game Γ M ,θ , p,C .
1305
1306 Define the functions C_i : S_1 × ... × S_n → ℝ by
1307 C_i(s_1, ..., s_n) = Σ_{ϑ_i ∈ θ_i} ∫_{θ_{−i}} u_i(ϑ_i)(s_1(ϑ_{−1}), s_2(ϑ_{−2}), ..., s_n(ϑ_{−n})) dp_i.
1308
1309 Then a Bayesian Nash equilibrium of the Bayesian game defined above is a Nash
1310 equilibrium of the normal form game Γ = (S1 ,..., S n , C1 ,..., C n ) . As any Bayesian game
1311 has a normal form representation a Bayesian equilibrium may be seen as a special Nash
1312 equilibrium. On the other hand, if there is common knowledge among the players about
1313 all players’ types then p1 = ... = p n degenerates to a Dirac measure with total mass one
1314 on the true type profile. Expectation building as in C i becomes trivial. Also every


1315 normal form game allows representation as a Bayesian game with degenerate type
1316 spaces and beliefs. In this sense any Nash equilibrium is a Bayesian Nash equilibrium. It
1317 is this fact which has been employed by REPULLO to establish versions of the
1318 Revelation Principle for Bayesian as well as for Nash equilibrium.
1319
1320 The basic idea of the Revelation Principle is to replace some game form that
1321 implements a social choice rule, i.e. which has only socially desired equilibria, by a
1322 direct game form, where truth telling constitutes an equilibrium. Depending on the
1323 employed equilibrium concept, weak or full implementation, the number of equilibria
1324 and whether the social choice rule is even a social choice function, one may derive
1325 results of different generality under the heading Revelation Principle. The power of this
1326 principle is very often overestimated as its different versions are often not distinguished.
1327 For a better understanding a second, weaker notion of implementation for direct games
1328 is needed. Let F : U → P ( A) be some social choice rule and E some equilibrium
1329 concept. The direct mechanism (U , g ) truthfully E –implements F , if for each utility
1330 profile u ∈ U the strategy profile u′ ≡ u, i.e. the truth, is an E-equilibrium satisfying
1331 g (u ′) ≡ g (u ) ∈ F(u ) . Note that truthful implementation in case of multiple equilibria is
1332 consistent with other non-truthful equilibria having outcomes outside F(u ) . Moreover
1333 such a non-truthful equilibrium may even Pareto-dominate the truthful one.
1334
1335 If in a direct mechanism implementing a social choice rule the E -equilibrium is always
1336 unique then truthful implementation coincides with weak implementation. In this case
1337 the revelation principle is quite strong. In fact, the E -equilibrium is then even in
1338 dominant strategies and the mechanism is called incentive compatible. In the general
1339 situation, however, truthful E -implementation is much weaker than E -implementation
1340 and it is more than euphemistic to claim that restriction to a direct mechanism is
1341 possible without loss of generality. Nevertheless the revelation principle finds frequent
1342 use in many applications of mechanism design like the theories of auction design,
1343 contract design or voting (see Mechanism theory).
1344
1345 7. Repeated Games
1346
1347 7.1 Evaluations
1348
1349 The general stochastic game with infinite horizon offers a host of new problems.
1350 Convergence of the payoffs can be established by either discounting or averaging and
1351 both versions exhibit fascinating aspects.
1352
1353 The version with discounted evaluation
1354
1355 C^i(x, y) = (1 − ρ) Σ_{t=1}^{∞} ρ^{t−1} f^i(x_{t−1}, y_t)        (23)
1356
1357 is thought to be much closer to the finite game: after waiting sufficiently long the
1358 payoffs in the tail game are so heavily discounted that they do not bear relevant
1359 influence. This observation can be made precise and is the basis to the first approach
1360 due to SHAPLEY who established the value of a stochastic zero sum game. Here every


1361 player observes a finite set of matrix games repeatedly played, the transition being
1362 Markovian.
1363
1364 The payoff indicated by the averaging evaluation
1365
1366 C^i(x, y) = lim_{T→∞} (1/T) Σ_{t=1}^{T} f^i(x_{t−1}, y_t)        (24)

1367
1368 (with various definitions concerning the limiting procedure) offers the modeling of a
1369 much more uncertain future. No finite initial sequence of rounds contributes any essential
1370 payoff. The essential payoffs will be achieved in the far away future and the
1371 nearby present may be used for signaling, agreements about contracts, punishments for
1372 violation of such contracts, indication of regret and the like.
1373
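A small numerical sketch (not from the original text) contrasting the two evaluations (23) and (24) on an invented stream of stage payoffs: the discounted value is dominated by the early rounds, whereas the average value ignores any finite initial segment; as ρ approaches 1 the discounted value approaches the long-run average.

```python
# Illustrative sketch only; the stage-payoff stream is invented.
def stage_payoff(t):
    """Hypothetical stage payoffs: 5 in the first 10 rounds, then alternating 0 and 2."""
    return 5.0 if t <= 10 else (2.0 if t % 2 == 0 else 0.0)

def discounted(rho, horizon=20000):
    """Evaluation (23), truncated at a long finite horizon."""
    return (1 - rho) * sum(rho ** (t - 1) * stage_payoff(t) for t in range(1, horizon))

def average(T):
    """Evaluation (24) for a finite truncation length T."""
    return sum(stage_payoff(t) for t in range(1, T + 1)) / T

for rho in (0.5, 0.9, 0.99, 0.999):
    print(f"discounted value, rho = {rho}: {discounted(rho):.3f}")
for T in (10, 100, 10000):
    print(f"average value,    T = {T}: {average(T):.3f}")
# As rho tends to 1 the discounted value approaches the long-run average of 1.0,
# an instance of the Tauberian connection discussed next.
```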
1374 The transition between discounted versions and average versions is technically handled
1375 by a "Tauberian Theorem", which links the behavior of the coefficients of
1376 a power series with the limiting behavior of the corresponding holomorphic function.
1377 This way GILLETTE established the first connection between the values of the two types
1378 of stochastic games. The asymptotic behavior of the value of the discounted game when
1379 the discount factor approaches 1 turns out to be the key to treating the average situation.
1380 The value is algebraic in a neighborhood of the limit (BEWLEY-KOHLBERG) and, as
1381 a consequence, it turns out that the general stochastic zero sum game with averaging
1382 evaluation has a value (NEYMAN-MERTENS).
1383
1384 7.2. Folk Theorems
1385
1386 For the non-zero sum case the situation is more complicated. A first approach is
1387 established by what is known as a "Folk Theorem", because its precise authorship is not
1388 easily established. According to this theorem one considers a bimatrix game infinitely
1389 often repeated with average evaluation (the supergame). It is not hard to see that the
1390 payoffs constitute the convex hull of the bimatrix game’s payoffs, meaning that the
1391 repeated game yields the same payoffs as correlated strategies in the one shot game. The
1392 payoffs resulting in the latter game can be interpreted as a bargaining problem (cf.
1393 Section 3). Now it turns out that all the individually rational payoffs of this bargaining
1394 situation can be obtained as Nash equilibrium payoffs of the supergame.
1395
1396 To some extent this result is disappointing, as Nash equilibria do not even yield Pareto
1397 efficient payoffs, not to speak of a “solution”. The attempt to improve upon this
1398 situation (RUBINSTEIN) by introducing subgame perfect strategies, however, yields
1399 the same result.
1400
1401 7.3 Repeated Games with Incomplete Information
1402
1403 With incomplete information stochastic games are even more difficult to approach. The
1404 field which is surveyed most extensively is the one of repeated games with incomplete
1405 information. This technical term denotes a type of game in which the players are facing
1406 an information structure which is established by chance at the beginning of the game.


1407 We may think of a finite set of matrices (bimatrices), the games played in the various
1408 states of nature. One of the states is chosen by chance and the corresponding game is
1409 then played repeatedly. In addition, there is an information matrix which reveals
1410 certain data about the true matrix (the true game) according to the actions chosen by
1411 both players. Both players observe these signals and they may add certain messages of
1412 their own by playing certain “natural” sequences of actions hinting towards their
1413 observations.
1414
1415 Given the chance moves the players may form priors concerning the true game which
1416 can now be updated in view of the signals they receive from the opponents.
1417
1418 For the zero sum case an essential result is presented by the vex-cav Theorem
1419 (AUMANN-MASCHLER, MERTENS-ZAMIR). Consider the value of the expected
1420 game presented by the mixture of the states of nature. Now, when the distribution
1421 varies, the value of the expected game appears as a function on the probability simplex.
1422 We may define the lower convex envelope of this function to be the largest convex
1423 function dominated by the value function.
1424
1425 The vex-cav Theorem states that the value of the repeated game exists if the successive
1426 formation of lower convex and upper concave envelopes of the value function as
1427 described above commutes.
1428
1429 Again, for the non zero-sum game the situation is more difficult to analyze. As in the
1430 Folk Theorem, the idea of cooperation in the canonical bargaining situation (with
1431 incomplete information) defined by the information structure and the prior about the
1432 truth game can be formulated. Again one strives to establish connections between Nash
1433 equilibria of the repeated game and individually rational payoffs of the cooperative
1434 game.
1435
1436 Cooperation is easily described: A mechanism in this context is given by a set of
1437 correlated strategies each one conditioned on the announcement of the players with
1438 regard to their observations. The corresponding payoff is the expectation generated by
1439 the original distribution of the chance move choosing a game and the observations
1440 resulting thereof. However, players may choose to misrepresent their type, so we have
1441 to consider incentive compatible mechanisms in the context of a bargaining situation
1442 with incomplete information (cf. Section 6).
1443
1444 On the other hand certain types of Nash equilibria (joint plan equilibria) can be
1445 formulated in the repeated game. A joint plan is a triple consisting of the following data.
1446 There is a set of finite sequences of actions of each player serving as signals. Next, there
1447 is a response kernel which stochastically yields such signals depending on the players’
1448 observations. And finally, there is a contract which, depending on the joint signals,
1449 yields actions in the repeated game.
1450
1451 It is rather obvious that a joint plan induces a mechanism: composing or mixing the
1452 choice of signals with the contract which bases (correlated) actions on signals obviously
1453 constructs a correlated choice of actions based on observations of signals, i.e., a
1454 mechanism in the above sense.


1455
1456 On the other hand, a joint plan can be implemented by a joint plan Nash equilibrium of
1457 the repeated game in the following way. The equilibrium yields the same payoffs as the
1458 mechanism which corresponds to the joint plan. Also it generates the signals of the joint
1459 plan in the first stages of the game (when played in equilibrium). Moreover, the
1460 distribution of the equilibrium path of the process yields certain sequences of actions
1461 “agreed upon” the frequencies of which are constructed in a way to imitate the
1462 correlations prescribed by the contract. Hence, the cooperative payoff resulting from the
1463 plan via the induced mechanism is also achieved in the non-cooperative equilibrium
1464 implementing the joint plan.
1465
1466 An early result is due to AUMANN-MASCHLER-STEARNS and has been generalized
1467 by SORIN and later by SIMON. Accordingly, in a 2-person game with incomplete
1468 information on one side, there exists an incentive compatible joint plan which is also
1469 individually rational and admits of a corresponding Nash equilibrium (see Stochastic
1470 and Repeated Games).
1471
1472 8. Evolution and Learning in Games
1473
1474 8.1. Introduction
1475
1476 One important strength of non-cooperative game models lies in the fact that, although
1477 equilibrium behavior of players is defined and of central concern in the analysis, also
1478 non-equilibrium behavior is possible. The very definition of a Nash equilibrium by
1479 no means implies that it is realized in an actual play of the game nor that persons playing
1480 that game have to act as rational players. It is exactly this feature which makes it possible to
1481 analyze non-equilibrium behavior and, in a dynamic context, convergence or divergence
1482 properties of chosen strategy profiles. Still in these cases the persons playing the game
1483 choose, even if not necessarily in a rational, consistent or only purposeful way, their
1484 strategies from given strategy sets.
1485
1486 Evolutionary biology in contrast uses the formal concept of a game, in static as well as
1487 in dynamic models, in quite a different way. Imagine some large population of
1488 individuals (of some species) each of which is labeled with some number i ∈{1,..., n} ,
1489 which is interpreted to represent some type. Now consider some n × n -matrix with
1490 entries aij , i, j = 1,..., n . From the point of view of standard game theory one can imagine
1491 a random device by which two individuals, labeled i and j are selected from the
1492 population to play the normal form bimatrix game, where i chooses one of n rows and
1493 j chooses one of n columns as strategies. If i chooses h(i ) and j chooses k ( j ) the
1494 resulting payoff vector is (ah (i ),k ( j ) , ak ( j ),h (i ) ) . Obviously, this game is direct as the sets
1495 of strategies and of types coincide.
1496
1497 Now evolutionary biology deviates in two respects. First, the two chosen individuals do
1498 not have any choice, they just are programmed to pick h(i ) ≡ i and k ( j ) ≡ j , i.e. to tell
1499 the truth about their labels. So they do not play the game although this terminology is
1500 used in large parts of the literature. Clearly, no outside observer seeing i and j picking


1501 h(i ) ≡ i and k ( j ) ≡ j could tell whether they play the game or follow some deterministic
1502 device. Secondly, the interpretation of the payoffs is now different. Rather than money
1503 or utility now fitness is considered to be the medium of payment.
1504
1505 Fitness is a stylized index representing reproductive success. Frequently the expected
1506 number of offspring of an individual is taken to define fitness; sometimes, however,
1507 more refined and detailed definitions are used. In a dynamic context fitness paid out to
1508 some strategy increases the probability that an individual labeled by that strategy will
1509 be chosen to be thrown as a strategy into the next round of the game.
1510
1511 A probability measure on the population determines the distribution of labels, from
1512 which the two “players” are drawn as a sample of size two. If this measure is
1513 concentrated on one label, evolutionary biologists speak of a monomorphic state of
1514 the population. Otherwise the state is polymorphic. SELTEN and HAMMERSTEIN in
1515 their chapter Game Theory and Evolutionary Biology of the Handbook of Game Theory
1516 stress the fact that a mixed strategy admits a monomorphic and a polymorphic
1517 interpretation. In the context described so far a mixed strategy is just a distribution over
1518 labels which are strategies. So it can be seen as a polymorphic state of the population.
1519 But one can obviously extend the framework sketched above by giving each individual
1520 a label which is a mixed strategy. The interpretation is that the program which rules the
1521 individuals’ behavior is not anymore deterministic but stochastic. Now, an individual is
1522 (characterized by) a mixed strategy. In this framework a measure on the population
1523 defines a distribution over labels, i.e. mixed strategies. Then, a distribution
1524 concentrated on one mixed strategy represents a monomorphic state.
1525
1526 The first formalization of evolutionary stability due to JOHN MAYNARD SMITH and
1527 GEORGE R. PRICE has become the predominant concept of evolutionary game theory.
1528 If the label of some individual in a population is meant to indicate some innate type of
1529 behavior then stability of this behavior type is defined as stability of the strategy that is
1530 representing it in the bilateral game. Stability of a monomorphic population, i.e. of the
1531 only type or label in the population is meant as immunity against the invasion of
1532 mutants. In a polymorphic population this idea extends to immunity against the invasion
1533 of mutants and perturbed versions of the incumbent type. Evolutionarily stable strategies can be
1534 characterized in a static normal form context as well as in a dynamic context.
1535
1536 8.2 Evolutionary Stable Strategies
1537
1538 Although evolutionary stability is a dynamic concept it can be represented in a normal
1539 form game. Let σ,σ′ be mixed strategies of the bimatrix game described above, i.e.
1540 distributions on {1,...n} . Let A = (aij ), i, j = 1,..., n denote the payoff matrix for the row
1541 player. His expected payoff from σ if the column player is labeled σ′ is σAσ′ .
1542
1543 Now a mixed strategy σ∗ is an evolutionary stable strategy (ESS) if for all mixed
1544 strategies σ ≠ σ∗ there is some ε̄ > 0 such that for all ε ∈ (0, ε̄]
1545
1546 σ∗ A[(1 − ε)σ∗ + εσ] > σA[(1 − ε)σ∗ + εσ] (25)


1547
1548 This condition expresses the idea that after a monomorphic population represented by
1549 the mixed strategy σ∗ has been changed by invasion of a small fraction of mutants
1550 programmed to play σ , still σ∗ is more effective than σ . It is not hard to see that for an
1551 ESS σ∗ the pair (σ∗ , σ∗ ) is a Nash equilibrium of the symmetric game represented by A .
1552 Moreover, this Nash equilibrium has an additional property. Every strategy σ′ ≠ σ∗ that
1553 is an equally good response to σ∗ , i.e. that satisfies σ∗ A σ∗ = σ′A σ∗ , is necessarily a
1554 worse response to σ′ than σ∗ , i.e. σ∗ A σ′ > σ′A σ′ .
1555
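A small sketch (not part of the original text) checking condition (25) numerically for the classic Hawk-Dove payoff matrix, here with the invented parametrization V = 2 and C = 4, for which the ESS is the mixed strategy σ∗ = (1/2, 1/2).

```python
# Illustrative sketch only; the Hawk-Dove parametrization is invented.
import numpy as np

A = np.array([[(2 - 4) / 2, 2.0],      # Hawk-Dove payoffs for the row player (V = 2, C = 4)
              [0.0,         1.0]])
sigma_star = np.array([0.5, 0.5])

def is_ess(sigma_star, A, eps=1e-3, grid=200, tol=1e-9):
    """Check condition (25) against a grid of candidate mutant strategies sigma."""
    for k in range(grid + 1):
        sigma = np.array([k / grid, 1 - k / grid])
        if np.allclose(sigma, sigma_star):
            continue
        mix = (1 - eps) * sigma_star + eps * sigma     # post-invasion population
        if sigma_star @ A @ mix <= sigma @ A @ mix + tol:
            return False
    return True

print("sigma* =", sigma_star, "is an ESS:", is_ess(sigma_star, A))
```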
1556 In fact, for any Nash equilibrium (σ∗ , σ∗ ) with this additional property σ∗ is
1557 evolutionary stable. As not every equilibrium strategy needs to be evolutionary stable
1558 the concept of an ESS defines a refinement of the Nash equilibrium. Unfortunately, in a
1559 non-negligible class of non-pathological games ESS do not exist. A further drawback
1560 lies in the fact that the notion of ESS is defined only for monomorphic states of the
1561 population. These insights led evolutionary biologists to an explicit dynamic analysis. In
1562 a dynamic discrete time model the game described above is used repeatedly to
1563 determine the payoffs of two individuals based on their strategy labels. The distribution
1564 over labels, however, changes during time depending on the payoffs of the previous
1565 round.
1566
1567 High payoffs to individuals turn into higher probabilities of their label to be selected for
1568 the next round. If this idea is modeled rigorously it leads to a dynamical system which is
1569 known as Replicator Dynamics. A corresponding model can be built for continuous
1570 time. It can be shown that in the framework sketched above any mixed strategy that is
1571 an ESS represents a population state that is asymptotically stable with respect to the
1572 replicator dynamics. This surprising result shows that the ESS defined in a static
1573 monomorphic mixed strategy setting when interpreted as a population state in a pure
1574 strategy population displays a polymorphic dynamic local stability (see Evolution and
1575 Learning in Games).
1576
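A minimal sketch (not from the original text) of the replicator dynamics for the same Hawk-Dove matrix: population shares of the pure strategies grow in proportion to their payoff relative to the population average, and from an arbitrary interior starting state the shares approach the ESS state (1/2, 1/2), illustrating the asymptotic stability result mentioned above.

```python
# Illustrative sketch only; Euler discretization of the replicator equation.
import numpy as np

A = np.array([[-1.0, 2.0],     # Hawk-Dove payoffs as in the ESS sketch above
              [0.0,  1.0]])

x = np.array([0.9, 0.1])       # initial shares of the pure strategies Hawk, Dove
dt = 0.01
for _ in range(5000):
    fitness = A @ x            # payoff of each pure strategy against the current population
    avg = x @ fitness          # average fitness in the population
    x = x + dt * x * (fitness - avg)   # Euler step of the replicator equation

print("long-run population state:", np.round(x, 4))   # approaches the ESS (0.5, 0.5)
```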
1577 8.3 Learning in Social Contexts
1578
1579 There are essentially two problems in social systems to which methods and insights
1580 from evolutionary biology are applied. What evolutionary forces lead to behavior and
1581 social states that are compatible with perfectly rational interaction of players? And, if
1582 there are multiple equilibria, which ones are limit points of evolutionary processes? The
1583 first problem may be seen as inducing search for a specific equivalence principle or,
1584 alternatively, as a problem of evolutionary implementation of rational solutions. The
1585 second problem falls into the realm of equilibrium selection and refinement.
1586
1587 Obviously, the extreme position of evolutionary biology that treats individuals as
1588 programmed automata is not very useful and needs to be abandoned in the context of
1589 social systems. Also the selection process can hardly be based on payoffs reflecting the
1590 number of offspring. Rather, behavioral patterns like imitation, adaptation and experimentation
1591 combined with learning should determine the evolution. The prototype of learning
1592 models in a social context is fictitious play. It was independently introduced in 1951 by

1593 GEORGE W. BROWN and by JULIA ROBINSON as an algorithm for computing
1594 Nash equilibria in certain classes of games. Each player in a certain round of a game
1595 uses his observations of the frequencies of the other players’ strategy choices in the past
1596 as the basis for estimating their present mixed strategies. Then he chooses a best reply to
1597 these. A game has the fictitious play property if every limit point of a sequence of
1598 strategy profiles generated by fictitious play is a Nash equilibrium of that game.
1599 Convergence behavior of fictitious play has been analyzed in increasing generality by
1600 ROBINSON, MIYASAWA, SHAPLEY and ROSENMÜLLER. ROBINSON proved in
1601 1951 that any finite zero-sum two-person game has the fictitious play property. The
1602 same was shown to hold true for every non-degenerate 2 × 2 -bimatrix game in 1961 by
1603 MIYASAWA. In 1971 ROSENMÜLLER derived this result, as well as SHAPLEY’S
1604 non-convergence result from 1964 for a class of 3 × 3 -bimatrix games, from a more
1605 fundamental analysis of convergence behavior of fictitious play based on eigenvalue
1606 considerations. More recent results are due to JORDAN and to MONDERER and
1607 SHAPLEY in 1991 and 1996, respectively. JORDAN proved that for sufficiently
1608 dispersed priors fictitious play does converge to a Nash equilibrium. MONDERER and
1609 SHAPLEY establish the fictitious play property for weighted potential games.
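A minimal sketch of the procedure just described is given below for Matching Pennies, a finite zero-sum two-person game, so that ROBINSON's theorem guarantees convergence of the empirical strategy frequencies to the mixed equilibrium (1/2, 1/2). The payoff matrices and the uniform prior counts are assumptions chosen only for this illustration.

```python
# Fictitious play in Matching Pennies: each player best responds to the
# empirical frequencies of the opponent's past choices.
import numpy as np

A = np.array([[1, -1],
              [-1, 1]])      # row player's payoffs
B = -A                       # column player's payoffs (zero-sum)

counts_row = np.ones(2)      # (fictitious) counts of the row player's past moves
counts_col = np.ones(2)      # ... and of the column player's past moves

for t in range(50000):
    belief_about_col = counts_col / counts_col.sum()
    belief_about_row = counts_row / counts_row.sum()
    i = int(np.argmax(A @ belief_about_col))    # row's best reply to the empirical mix
    j = int(np.argmax(belief_about_row @ B))    # column's best reply
    counts_row[i] += 1
    counts_col[j] += 1

print(np.round(counts_row / counts_row.sum(), 3))   # -> [0.5 0.5]
print(np.round(counts_col / counts_col.sum(), 3))   # -> [0.5 0.5]
```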
1610
1611 There are other classes of games in which Nash equilibrium appears to be the natural
1612 solution, where, however, fictitious play does not single it out. FOSTER and YOUNG
1613 provided a special coordination game, which they called the Merry-Go-Round Game, where
1614 players follow fictitious play in cyclic patterns and never coordinate.
1615
1616 Why do players not learn that they are trapped in cycles? This question leads to the idea
1617 of adaptive learning where in fact players are less rational and less informed than in
1618 fictitious play. PEYTON YOUNG modeled a context of short-sighted players with
1619 limited information and reasoning ability and a probabilistic error rate independent
1620 across the players as a Markov process which he termed adaptive play. The basic
1621 ingredients are boundedly rational reactions to predictions of the other players'
1622 behavior that are estimated from limited data on past play and stochastically distorted. In their model
1623 of adaptive learning Foster and Young formulated an alternative stronger solution
1624 concept termed stochastic stability. In contrast to the ESS a stochastically stable
1625 equilibrium is even robust against persistent random perturbations. In his book
1626 Individual Strategy and Social Structure Young develops an extensive theory of
1627 adaptive learning and relates stochastic stability to other well known concepts of game
1628 theory like, for instance, risk-dominance, focal points or maximin contracts. Among
1629 several interesting results relating adaptive learning and stochastic stability to the
1630 theories of bargaining and of contract his characterizations of the Nash and of the Kalai-
1631 Smorodinsky solutions as limits of stochastically stable payoff vectors are particularly
1632 remarkable. They demonstrate that adaptive behavior of people can come arbitrarily
1633 close to results which are predicted by axiomatic solutions from standard game theory
1634 based on rationality assumptions on the players.
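The flavor of these results can be conveyed by a deliberately crude simulation, much simpler than YOUNG's actual model of adaptive play: a small population repeatedly best responds to last period's action frequencies in a Stag Hunt, with a small independent mistake probability. All payoffs and parameters below are assumptions made only for this sketch; as the error rate shrinks, the process spends almost all of its time at the risk-dominant (not the payoff-dominant) convention.

```python
# Perturbed best-reply dynamics in a Stag Hunt (0 = Stag, 1 = Hare).
# (Stag, Stag) is payoff dominant, (Hare, Hare) is risk dominant.
import numpy as np

A = np.array([[4.0, 0.0],
              [3.0, 3.0]])

def average_hare_share(N=20, eps=0.05, T=100000, seed=1):
    rng = np.random.default_rng(seed)
    actions = np.array([0] * (N // 2) + [1] * (N // 2))   # start from a 50:50 population
    hare_time = 0.0
    for _ in range(T):
        freq = np.bincount(actions, minlength=2) / N      # yesterday's action frequencies
        best = int(np.argmax(A @ freq))                   # common myopic best reply
        tremble = rng.random(N) < eps                     # independent mistakes
        actions = np.where(tremble, rng.integers(0, 2, size=N), best)
        hare_time += (actions == 1).mean()
    return hare_time / T

for eps in (0.10, 0.05, 0.02):
    print(eps, round(average_hare_share(eps=eps), 3))
# The long-run share of Hare players rises towards 1 as eps shrinks: the
# risk-dominant convention is the stochastically stable one.
```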
1635
1636 Another example of boundedly rational behavior leading to an established rationality
1637 based outcome is due to VEGA-REDONDO. He shows that in oligopolistic markets
1638 with n identical firms involved in Cournot type quantity setting competition a learning
1639 process combining elements of experimenting and of imitating has as its long-run
1640 outcome the unique symmetric Walrasian equilibrium.
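A rough simulation in this spirit is sketched below for a linear Cournot market; the demand and cost numbers, the quantity grid and the experimentation rate are assumptions made only for the illustration. The long-run modal quantity is the symmetric Walrasian output (price equal to marginal cost), not the Cournot-Nash output.

```python
# Imitate-the-best with occasional experimentation in a linear Cournot market
# (in the spirit of Vega-Redondo's result; all numbers are illustrative assumptions).
import numpy as np

a, b, c, n = 10.0, 1.0, 2.0, 4           # inverse demand p = a - b*Q, marginal cost c
grid = np.linspace(0.0, 3.0, 31)         # admissible individual quantities
q_walras = (a - c) / (b * n)             # symmetric Walrasian output per firm: 2.0
q_cournot = (a - c) / (b * (n + 1))      # symmetric Cournot-Nash output per firm: 1.6

rng = np.random.default_rng(0)
q = rng.choice(grid, size=n)             # initial quantities
visits = np.zeros_like(grid)

T, eps = 50000, 0.05
for _ in range(T):
    price = max(a - b * q.sum(), 0.0)
    profit = (price - c) * q
    q = np.full(n, q[np.argmax(profit)])         # everybody imitates the most profitable firm
    mutate = rng.random(n) < eps                 # occasional experimentation
    q[mutate] = rng.choice(grid, size=int(mutate.sum()))
    visits[np.abs(grid - q[0]).argmin()] += 1    # record firm 1's quantity

print("Walrasian output per firm  :", q_walras)
print("Cournot output per firm    :", q_cournot)
print("Most visited grid quantity :", grid[np.argmax(visits)])   # close to 2.0
```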

1641
1642 These insights that repeated interaction of boundedly rational individuals may produce
1643 outcomes compatible with standard rationality based game theory lend support to the
1644 optimistic conclusions VERNON SMITH has drawn from various experiments, as
1645 discussed in Section 9 on Experimental Games.
1646
1647 9. Experimental Games
1648
1649 9.1. Introduction
1650
1651 It should have become quite clear by now that game theory is not the unified general
1652 theory of human interaction in groups or societies. It is more a collection of concepts,
1653 methods and modeling devices whose use and power vary and depend on the specific
1654 problem and context. Common to all these game theoretic models except evolutionary
1655 game theory is the fundamental assumption of perfectly rational players. These have
1656 well defined goals, an unlimited power of learning, reasoning, understanding and
1657 computing and are assumed to use them in order to determine their decisions. Moreover
1658 in many situations heroic assumptions on information processing abilities and
1659 knowledge of players are made up to the extreme of common knowledge that was
1660 discussed earlier. Experimental games are, roughly, concerned with the question to what
1661 extent game theory really describes and predicts human behavior. Starting with the
1662 degenerate but nevertheless highly complex and challenging theory of one person
1663 games or decision theory, where axiomatized models of consistent individual behavior
1664 like expected utility theory are tested, almost all aspects of game theoretic modeling
1665 have been put on trial in experiments. The design of experiments is an extremely
1666 sensitive and demanding business. The experimenter has to make sure that the players
1667 following precisely his rules actually play the game he has in mind. Moreover he must
1668 be able on the basis of the results to cleanly separate different possible causes and
1669 explanations. However ingenious several experimental devices may be, the results
1670 cannot question game theory as a normative interpersonal decision theory. They only
1671 can possibly cast doubt on or even discredit it as a descriptive theory with predictive
1672 power. In the following sections some experimental results are briefly presented and
1673 discussed.
1674
1675 9.2 Repeated Prisoners’ Dilemma
1676
1677 Tradition has it that the first attempt to test game theory was a hundredfold repetition of
1678 the Prisoner’s Dilemma game played between the economists ALCHIAN and
1679 WILLIAMS in 1950. In this game two players have two strategies each, namely
1680 observing or breaking a contract. If both observe or both break they receive ten units or
1681 one unit of payoff each, respectively. If they choose different strategies the player who
1682 breaks receives twenty units, the other one zero. To break is for both players a dominant
1683 strategy and constitutes the unique Nash equilibrium. Hence, by rationally playing the
1684 equilibrium strategy, both players are condemned to forgo the Pareto dominating and
1685 even Pareto efficient payoff vector (10,10).
1686
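The dominance and equilibrium claims for this one-shot game can be verified mechanically; the following sketch merely encodes the payoffs described above (strategy 0 = observe, 1 = break) and enumerates the pure equilibria.

```python
# One-shot Prisoner's Dilemma with the payoffs from the text.
import itertools
import numpy as np

A = np.array([[10, 0],       # row player's payoffs: index 0 = observe, 1 = break
              [20, 1]])
B = A.T                      # symmetric game: column player's payoffs

# "Break" strictly dominates "observe" for the row player:
print(all(A[1, j] > A[0, j] for j in range(2)))                    # True

def is_nash(i, j):
    return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

print([(i, j) for i, j in itertools.product(range(2), repeat=2) if is_nash(i, j)])
# [(1, 1)] -- mutual breaking, yielding (1, 1) instead of the efficient (10, 10)
```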
1687 Although mutual defection represents the unique Nash equilibrium of the one shot
1688 normal form game, the series of one hundred plays showed mutual cooperation in 60

1689 instances as opposed to only 14 mutual defections. Similar behavior was experimentally
1690 established by McKELVEY and PALFREY about forty years later. According to the
1691 Folk Theorem this behavior would be consistent with equilibrium behavior in an
1692 infinitely repeated Prisoners’ Dilemma game. In the finitely repeated game, however, it
1693 contradicts rational behavior because backward induction singles out consecutive
1694 mutual defection as the unique subgame perfect equilibrium. A large experimental
1695 literature including work by RAPOPORT and CHAMMAH and by SELTEN and
1696 STÖCKER has confirmed this basic result of deviating from game theory’s rational
1697 equilibrium behavior as dictated by backward induction. More specifically, an end effect
1698 has been systematically observed where players after many stages of cooperation
1699 defected towards the end in accordance with the one shot game equilibrium. Evidence
1700 here seems to lead to a falsification of the rationality assumptions implicit in the game
1701 theoretical model (see Experimental Games).
1702
1703 9.3 Coordination Games
1704
1705 An essentially unsolved problem of non-cooperative game theory is how to deal with
1706 multiple equilibria. A vast literature on refinements has been devoted to the attempt to
1707 restore uniqueness by excluding “implausible” equilibria. As it turned out, all established
1708 refined equilibrium concepts, apart from the extreme dominance equilibrium, are open
1709 to non-uniqueness. This fact requires some kind of coordination between players and, on
1710 the other hand, opens the door for building up reputation in repetitions of the game.
1711 Experiments due to SCHELLING led to the insight that players have a surprising ability
1712 to coordinate their behavior and to base their selection of an equilibrium on shared
1713 senses of salience or prominence once they are in some joint social context.
1714 the right hand side of the street is the focal point equilibrium in most parts of the world,
1715 yet not everywhere.
1716
1717 Adding, in a symmetric 2 × 2 -bimatrix game with two equilibria in the upper left and the
1718 lower right corners, a third dominated strategy for both players should have no effect on
1719 the frequency with which one or the other equilibrium is played. Nevertheless it has been
1720 shown in experiments that the specification of the opponents’ payoffs for the added
1721 third dominated strategies may have dramatic consequences for the relative frequency
1722 of the choice of the first equilibrium. High amounts which could be gained there seem
1723 to have an impact.
1724
1725 Consider the purely symmetric game where the players’ strategy sets are {R1, R2} for
1726 the row player 1 and {C1, C2} for the column player 2 and the payoff functions are
1727 given by Fi(Rj, Ck) = δjk for i, j, k = 1, 2.
1728
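A short check (in an assumed Python encoding of this game) confirms that exactly the two diagonal strategy pairs are pure Nash equilibria and that nothing in the payoffs distinguishes them.

```python
# Pure coordination game from the text: both players receive 1 if the indices
# of their chosen strategies match and 0 otherwise.
import numpy as np

A = np.eye(2)                # row player's payoffs
B = np.eye(2)                # column player's payoffs (identical)

pure_eq = [(j, k) for j in range(2) for k in range(2)
           if A[j, k] >= A[:, k].max() and B[j, k] >= B[j, :].max()]
print(pure_eq)               # [(0, 0), (1, 1)], i.e. (R1, C1) and (R2, C2)
# There is also a mixed equilibrium in which both randomize 50:50; the payoffs
# alone single out neither of the two pure equilibria.
```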
1729 Despite game theory’s inability to discriminate between the two equilibria ( R1 , C1 ) and
1730 ( R 2 , C 2 ) experiments showed differences depending on the cultural background of the
1731 players.
1732

1733 While a majority played ( R1 , C1 ) , a group consisting of Chinese, Japanese and Koreans
1734 was more attracted by ( R 2 , C 2 ) . The training to read rows from left to right might have
1735 created a salience for the majority when confronted with the game in bi-matrix form,
1736 while for the minority group ( R 2 , C 2 ) might have been more prominent. Experimenters’
1737 explanation here would be that some characteristics of the players not caught by a
1738 rational-choice based game model could be influential for the outcome.
1739
1740 Another interesting insight from experiments comes from the observation that in games
1741 with several Pareto ordered Nash equilibria frequently not the Pareto optimal one was
1742 played. This hints at a serious coordination failure. A remarkable observation in this
1743 context has been made in a series of experiments by VAN HUYCK, BATTALIO and
1744 BEIL at Texas A & M University. In finite repetitions of some coordination game with
1745 multiple Pareto ranked equilibria observed behavior never came close to the payoff
1746 dominant, i.e. Pareto optimal, equilibrium. But when the rights to participate in the game
1747 were auctioned beforehand and the winning bids were commonly known among the
1748 players, behavior always closely approximated the efficient equilibrium (see
1749 Experimental Games).
1750
1751 9.4. Bargaining Games
1752
1753 The most popular approaches to bargaining games are the axiomatic cooperative one
1754 due to Nash and the non-cooperative alternating offers model due to RUBINSTEIN. Both
1755 approaches have been exposed to experimental tests. One of the fundamental difficulties
1756 in testing Nash’s purely welfaristic model, in which only players’ utilities determine the
1757 solution, lies in the fact that knowledge of the players’ utilities is required. But these are
1758 not observable. ROTH and MALOUF devised a clever method to overcome this
1759 problem by arranging bargaining over distributions of lotteries. In the experiment the
1760 distribution of lottery tickets determines with which probabilities a random mechanism
1761 attributes high or low prizes to players.
1762
1763 When the row player 1 gets 80 tickets and the column player 2 gets 20, the consequence
1764 is that 1 has an 80% chance of winning his high prize and a 20% chance of winning his
1765 low prize. Important is the fact that the high and low prizes for the two players may be
1766 different. As in the Nash model the solution is independent of players’ specific cardinal
1767 utility representations, the experimenter may set each players’ high prize equal to one
1768 unit and its low prize equal to zero. Now, as players bargaining over probabilities of
1769 receiving rather than over utilities, the experimenters do not need to know how much
1770 that is valued by the different players. Formally, players negotiate over how to divide
1771 the expected gain of one unit. The Nash solution selects the allocation that maximizes
1772 the product of the expected gains, the so called Nash Product. This would require an
1773 equal division of lottery tickets.
1774
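With this normalization the computation behind the last claim is elementary; the sketch below searches over all integer splits of 100 lottery tickets (the grid size is an assumption for the illustration) and recovers the equal split as the maximizer of the Nash product.

```python
# Nash product over splits of 100 lottery tickets; each player's high prize is
# normalized to utility 1, the low prize (and disagreement) to 0.
import numpy as np

tickets_for_1 = np.arange(0, 101)            # tickets given to player 1
gain_1 = tickets_for_1 / 100.0               # player 1's expected utility gain
gain_2 = 1.0 - tickets_for_1 / 100.0         # player 2's expected utility gain
nash_product = gain_1 * gain_2

print(tickets_for_1[np.argmax(nash_product)])   # 50 -- the equal division of tickets
```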
1775 According to ROTH and MALOUF the results in experiments centered around two
1776 distributions when players knew their opponents’ prizes.
1777

1778 One is the Nash solution, i.e. an equal number of lottery tickets for both players; the other
1779 one is the distribution that generates equal expected monetary gains. So experiments provide
1780 partial support for the Nash solution.
1781
1782 The experiments testing Rubinstein’s subgame perfect equilibrium predominate in the
1783 more recent literature. Truncated versions of Rubinstein’s game, ending after round k if
1784 no agreement has been reached by then, were played by GÜTH, SCHMITTBERGER and
1785 SCHWARZE; for k = 1 this is the ultimatum game, in which both players receive nothing unless the single
1786 offer is accepted. Although subgame perfect equilibria prescribe a solution where the
1787 offering player gets close to everything, the average offer in experiments was 33%!
1788 These results induced the authors to doubt that the subgame perfect equilibrium has
1789 predictive power in bargaining games.
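The subgame perfect benchmark against which these offers are judged follows from a few lines of backward induction; the k-round alternating-offers structure over a pie of size 1 and the common discount factor δ = 0.9 are assumptions made only for this sketch (with money in smallest units the responder would be offered one such unit rather than literally zero).

```python
# Backward induction in a k-round alternating-offers game over a pie of size 1.
def proposer_share(k, delta=0.9):
    """Subgame perfect share kept by the player proposing in round 1."""
    share = 1.0                        # in the last round the proposer keeps everything
    for _ in range(k - 1):             # walk backwards through the earlier rounds
        share = 1.0 - delta * share    # concede delta times the opponent's continuation value
    return share

print(proposer_share(1))    # 1.0   -- ultimatum game: the responder is offered (about) nothing
print(proposer_share(2))    # 0.1   -- with delta = 0.9 the second mover's threat is strong
print(proposer_share(50))   # ~0.524 -- approaches Rubinstein's 1/(1 + delta) for long horizons
```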
1790
1791 Further experiments confirmed the impression that experimental results were in conflict
1792 with the assumptions that money is a good proxy for utility payoff and that bargainers
1793 simply want to maximize their own incomes. The results tend to indicate falsification of the
1794 joint hypothesis of expected payoff maximization and of backward induction.
1795
1796 Several experiments conducted by ALBERS at the Institute of Mathematical Economics in
1797 Bielefeld led him to the conclusion that the prominence of the numbers used as payoffs
1798 systematically influences players’ decisions and outcomes in games (see Experimental
1799 Games).
1800
1801 9.5. Optimistic Conclusion
1802
1803 VERNON SMITH, who ran experiments in one of the world’s largest
1804 experimental laboratories at the University of Arizona in Tucson, arrives, after an
1805 uncounted number of experiments on bilateral bargaining games, oligopoly games,
1806 various sealed bid auctions and continuous double auctions, at a surprisingly optimistic
1807 and puzzling conclusion.
1808
1809 In situations of complete and common information about other players’ preferences,
1810 which is the standard framework in which Nash equilibrium is considered the adequate
1811 solution concept for non-cooperative games, it has only scarce empirical confirmation.
1812
1813 However, in the more realistic context of repeated games with private incomplete
1814 information non-cooperative equilibria (and similarly Walrasian equilibrium) have
1815 according to experiments a high predictive power. Similarly, the experiments of ROTH
1816 and MALOUF on cooperative bargaining give a strong support to the Nash solution in
1817 bargaining with private information. Smith’s irritating message is it that rational
1818 equilibrium and axiomatic bargaining outcomes lack experimental support in situations
1819 where they can be theoretically justified and have high predictive power where this is
1820 not the case. According to VERNON SMITH “the theoretical problem that an
1821 equilibrium of a model might be approximated without agent knowledge of
1822 understanding of the model has important implications for the concept of common
1823 knowledge that allegedly underlines contemporary game theory.”
1824
1825

©Encyclopedia of Life Support Systems (EOLSS)


OPTIMIZATION AND OPERATIONS RESEARCH – Game Theory - Joachim Rosenmuller and Walter Trockel

1826 10. Concluding Remarks

1828 In his book Choice and consequence THOMAS SCHELLING has a chapter termed
1829 what is Game Theory. There he finds that in contrast to economic, statistical and
1830 decision theory “there is no accepted name for whatever the field is of which “game
1831 theory” refers to the theoretical frontiers” like economics, statistics or decisions.
1832 Despite several earlier developments described in the Introduction the appearance of
1833 von Neumann’s and Morgenstern’s book can be seen as the birth of Game Theory as a
1834 discipline. The authors, a mathematician and an economist, reflect perfectly the parent
1835 fields of the new discipline. 56 years later, when the recently founded Game Theory
1835 Society organized its first world congress “Game Theory 2000” in Bilbao, Spain,
1836 more than 700 researchers from several fields were living proof of the enormous
1837 impact and fast growth of game theory as a field that had penetrated the social sciences and
1839 found applications in biology, engineering and systems design. The treatment of game
1840 theory in the EOLSS in eight articles and this accordingly structured topic level survey
1841 does provide the fundamentals of the field and stresses some specific developments that
1841 were particularly important for the development of the discipline. It also hints at
1842 interesting new developments in the analysis of and experiments on evolutionary
1843 processes and learning in social systems that may turn out to be of value for future global
1845 sustainable development.
1846
1847 In the EOLSS Game Theory justifiably has been represented as one of the topics of the
1848 theme Operations Research (see Operations Research).
1849
1850 It should not be hidden that many researchers meanwhile would insist on a reverse
1851 relation, namely operations research as part of game theory. This point of view is
1852 consistent not only because of the variety of mathematical techniques employed but in
1853 particular under the aspect that any optimization, decision or planning problem can
1854 either be seen as an instance of a one-person game or, because of involved interpersonal
1855 interests and conflicts, falls in a natural way into the realm of interpersonal decision
1856 theory.
1857
1858 For instance a problem of multicriteria decision making, a field quite popular in
1859 Operations Research, can be modeled as a game where the various criteria represent
1860 interests of different players (see Multicriteria Decision Making).
1861
1862 There are numerous subdivisions and fields of applications in game theory that are not
1863 even mentioned in the present treatment. The three volumes of the Handbook of Game
1864 Theory edited by ROBERT AUMANN and SERGIU HART carry the subtitle “with
1865 Economic Applications”. Despite this focus the Handbook contains some sixty articles
1866 trying to cover game theory.
1867
1868 Chapters on Psychology, Inspection Games, Differential Games, Power and Stability in
1869 Politics, Game-Theoretic Aspects of Computing, Moral Hazard, Patent Licensing,
1870 Strategic Analysis of Auctions, and Search reflect the enormous variety of a vastly
1871 developing discipline.
1872
1873

1874 Glossary
1875
Axiomatic Approach: The attempt to uniquely characterize a solution concept on
a class of games by a set of mathematical axioms the
interpretation of which is considered self-evident.
Bargaining Problem: An NTU-game where the only admissible coalitions are
singletons and the grand coalition.
Bargaining Sets: A class of solution concepts for NTU-games containing
only payoff vectors that are immune against admissible
alternatives proposed by coalitions as objections.
Coalitional Function: A mapping that assigns utilities or outcomes to coalitions
about which these coalitions can enter binding agreements.
Common Knowledge: The paradigm of a situation in which the players can fully
observe their opponents’ knowledge, their observations
about one’s own knowledge, etc., on arbitrary levels of the
hierarchy.
Cooperative Solution: A mapping that assigns utilities to players in view of their
bargaining on coalitional power as described by the
coalitional function.
Cost Sharing: In Game Theoretical terms the application of solution
concepts of cooperative game theory towards the fair or
equal distribution of disutilities among cost generating
divisions or sections of a firm.
Equilibrium: A stable situation described in terms of strategies, payoff
vectors, or consumption decisions from which to deviate is
not profitable for a player or a group of players.
Economy: A specification of data thought to completely describe an
economic system thereby representing the exogenous
restrictions for economic agents’ interaction in markets.
Equivalence Principle: A class of results characterizing Walrasian (competitive)
equilibria of large economies as (elements of) game
theoretic solutions.
Evolutionary Stability: A property of strategies in a symmetric bimatrix game
assuring that a strategy is a best response to itself that
moreover beats other best responses as a response to them.
Extensive Form: The representation of a game describing a stochastic
process susceptible to the players’ successive actions and
resulting in intermediate and final payoffs to them.
Fictitious Play: The prototype of learning models in a social context: In a
sequence of game repetitions each player chooses a strategy
that is an optimal response to an estimated mixed strategy
combination of the other players derived from observation
of their past behavior.
Game: The paradigm of human competitive and cooperative
interaction calling for decisions (under uncertainty) of the
players involved and resulting in payoffs or outcomes for
these players.
Game Form: A list of strategy sets for n players, interpreted as sets of
admissible messages, together with an outcome function
associating with any strategy profile an outcome interpreted
as a social state.
Implementation: A simultaneous realization of a social choice rule by non-
cooperative equilibria for a whole class of games generated
by a given game form together with a family of preference
profiles.
Incentive Compatibility: A game form (or mechanism) is incentive compatible if it is
direct, i.e. strategy sets are the sets of possible preferences
or types, and truth telling is the unique equilibrium.
Mechanism: A device processing the players’ signals, permitting to
correlate their announcements and resulting in a cooperative
decision, thereby possibly inducing strategic behavior of
players anticipating the structure of this device. Sometimes
the term is used as a synonym for “game form”.
Nash Program: A research agenda in game theory, initiated by John Nash
that tries to characterize certain payoff vectors for players
alternatively via axiomatic cooperative solutions and via
non-cooperative equilibria, thereby making both
approaches more transparent.
Normal Form: The representation of a game describing the strategic
alternatives available to the players and the payoff
functions defined on these alternatives. No binding
agreements available.
Nucleoli: A class of (nonlinear) solution concepts based on a fair or
equalizing assessment of complaints to be lodged by
coalitions against proposals for utility assignments.
Pareto Efficiency: The most popular axiomatic property of social outcomes
that makes any individual improvement impossible unless
at least some other persons are worse off.
Revelation Principle: A group of results asserting that any social choice rule that
can be implemented in Nash or Bayesian Nash equilibrium
allows for an alternative truthful implementation in a direct
game.
Social Choice Rule: A mapping associating with any admissible profile of
preferences on a set of social states a subset of this set
interpreted as socially desired.
Solution: A mapping associating with any member of a class of
games some set of feasible payoff vectors for the players.
Strategy: A complete plan of decisions for a player to be
implemented contingent on all possible states of nature,
characteristics of the opponents, their knowledge and
intentions, at the same time respecting the possible choice
of strategies of the opponents.
TU-NTU: Transferable versus nontransferable utility. Concepts
attached to cooperative games according to whether a
universal medium of transfer (“money”) is thought to be
available or not.

Type: A state of nature describing the characteristics of a player,
randomly generated and in general (privately) observable to
this player, thus allowing him to infer conditional
probabilities concerning the other players’ types.
Values: In Game Theoretical context a class of (linear) solution
concepts that reflect a priori assessments or expectations
of gains, mainly axiomatically justified.
Welfarism: An ideological position in welfare economics and social
choice theory claiming that social welfare in a society depends
only on the utilities of its individual members.
Axiomatic bargaining models in the tradition of Nash are
welfaristic. In contrast, models with an underlying outcome
space representing a specific economic or social context are
not.
1876
1877 Bibliography
1878
1879 [1] R.J. Aumann, (1967) A survey of cooperative games without side-payments, Essays in Mathematical
1880 Economics in Honor of Oskar Morgenstern (M. Shubik, ed.), Princeton University Press, pp. 3-27.
1881 [Describes the state of the art at the time but actually constitutes the embarkation point for two decades
1882 of research in NTU-games.]
1883 [2] R.J. Aumann, (1983) On the non-transferable utility value, RM 55, Center for Research in
1884 Mathematical Economics and Game Theory, The Hebrew University, Jerusalem. [This paper completes
1885 the axiomatization of the Shapley value to NTU games].
1886 [3] R.J. Aumann and S. Hart (eds.), (1992, 1994, 2001) Handbook of game theory with economic
1887 applications, vol. 1-3, Elsevier Science Publishers. [These three volumes provide a thorough treatment of
1888 Game Theory and its applications and represent the state of the art.]
1889 [4] R.J. Aumann and L.S. Shapley, (1974) Values of nonatomic games, Princeton University Press,
1890 Princeton. [Establishes Game Theory for a continuum or “ocean” of players.]
1891 [5] M.A. Dimand and R.W. Dimand, (1996) The history of game theory, Routledge, London and New
1892 York. [This book describes the historical development of Game Theory.]
1893 [6] D. Fudenberg and D. Levine, (1998) The theory of learning in games, MIT Press, Cambridge. [A
1894 comprehensive treatment of evolutionary game theory and learning in games.]
1895 [7] R.H. Gertner and R. C. Picker, (1994) Game theory and the law, Harvard University Press,
1896 Cambridge. [A thorough discussion of the application of game theory to law and legal problems.]
1897 [8] L. Hurwicz, D. Schmeidler and H. Sonnenschein (eds.), (1985) Social goals and social organization,
1898 Cambridge University Press, Cambridge. [This volume contains extensive treatments of incentive
1899 problems and different areas of implementation and mechanism theory].
1900 [9] J. Kagel and A. Roth (eds.), (1995) The handbook of experimental economics, Princeton University
1901 Press, Princeton. [The Handbook contains several chapters covering experiments on games and
1902 decisions.]
1903 [10] J.F. Mertens and S. Sorin (eds.), (1994) Game-theoretic methods in general equilibrium analysis,
1904 Kluwer Academic Publishers, Dordrecht. [This volume contains a variety of articles surveying the role of
1905 game theoretic solutions for competitive markets and discussing the equivalence principle. ]
1906 [11] M. Osborne and A. Rubinstein, (1994) A course in game theory, The MIT Press, Cambridge, Mass.
1907 London, England. [A most comprehensive textbook.]
1908 [12] J. Rosenmüller, (2000) Game Theory: Stochastics, information, strategies and cooperation, Theory
1909 and Decision Library C, vol. 25, Kluwer Academic Publishers, Boston, Dordrecht, London.

1910 [13] L. Samuelson, (1997) Evolutionary games and equilibrium selection, MIT Press, Cambridge. [A
1911 careful analysis of evolutionary game theory and its use for equilibrium selection in games.]
1912 [14] L.S. Shapley, (1969) Utility comparisons and the theory of games, La Decision: Aggregation et
1913 Dynamique des ordres de Preference (Paris), Editions du Centre National de la Recherche Scientifique,
1914 pp. 251-263. [The basic paper extending the Shapley value to NTU-games.]
1915 [15] W. Vickrey, (1962) Auctions and bidding games, Recent Advances in Game Theory, Princeton
1916 Conference Series, vol. 29, Princeton University, pp. 15-27. [This is the basic article establishing the
1917 impact of Game Theory on the Theory of Auctions and Bidding.]
1918 [16] J. von Neumann and O. Morgenstern, (1944,1947,1953) Theory of games and economic behavior,
1919 Princeton University Press, Princeton. [The basic volume containing the foundation of Game Theory. ]
1920
1921 Biographical Sketches
1922
1923 Joachim Rosenmüller is Professor of Mathematical Economics at the Institute of Mathematical
1924 Economics at the University of Bielefeld, Germany. He received his diploma in Mathematics at the
1925 University of Göttingen, his doctoral degree in Mathematics at the University of Erlangen, and his second
1926 doctoral degree (habilitation) in Mathematics at the same university. He has done research in Cooperative
1927 and Non Cooperative Game Theory, Operations Research, and General Equilibrium Theory. Professor
1928 Rosenmüller has more than sixty publications including two books, two lecture notes and articles in
1929 Scientific Journals such as International Journal of Game Theory, Games and Economic Behavior,
1930 Journal of Mathematical Economics, Archiv der Mathematik, Mathematics of Operations Research,
1931 S.I.A.M. Journal of Applied Mathematics, Advances in Mathematics, Discrete Mathematics,
1932 Mathematical Methods of Operations Research (ZOR), Mathematical Social Sciences, Journal of
1933 Economics, Discrete Applied Mathematics, Applied Mathematics and Computations, Theory and
1934 Decision. He is on the editorial board of International Journal of Game Theory (for six years as editor-in-
1935 chief), Games and Economic Behavior, Mathematics of Operations Research, Review of Economic
1936 Design, Mathematical Methods of Operations Research, The Theory and Decision Library (Series C). He
1937 is a member of the Game Theory Society, the Deutsche Mathematiker-Vereinigung, Gesellschaft für
1938 angewandte Mathematik und Mechanik.
1939
1940 Walter Trockel is Professor of Mathematical Economics at the Institute of Mathematical Economics at
1941 Bielefeld University, Germany. He received a diploma in Mathematics and his doctoral degree in
1942 Economics at Bonn University, where he also got his Habilitation. He has done research in general
1943 equilibrium theory, demand analysis, decision theory and game theory. He has more than fifty
1944 publications including two books and articles in scientific journals such as Econometrica, Economic
1945 Theory, Journal of Economic Theory, Journal of Mathematical Economics, International Game Theory
1946 Review, Social Choice and Welfare, Optimization and Journal of Mathematical Analysis and
1947 Applications. Dr. Trockel is managing editor of the Springer Lecture Notes in Economics and
1948 Mathematical Systems and serves in the editorial boards of Economic Theory, Journal of Mathematical
1949 Economics, Review of Economic Design and International Game Theory Review. He is a member of the
1950 Econometric Society, the Game Theory Society, the Society for the Advancement of Economic Theory,
1951 the Society of Economic Design, the Society of Social Choice and Welfare, the Deutsche Mathematiker-
Vereinigung, the European Economic Society, the Gesellschaft für Operations Research and of the
1953 Theoretischer Ausschuss of the Verein für Socialpolitik.
1954
