You are on page 1of 32

Ann Oper Res

DOI 10.1007/s10479-017-2453-z

ORIGINAL-SURVEY OR EXPOSITION

Basic theoretical foundations and insights on bilevel


models and their applications to power systems

David Pozo1 · Enzo Sauma2 · Javier Contreras3

© Springer Science+Business Media New York 2017

Abstract Decision making in the operation and planning of power systems is, in general,
economically driven, especially in deregulated markets. To better understand the partici-
pants’ behavior in power markets, it is necessary to include concepts of microeconomics and
operations research in the analysis of power systems. Particularly, game theory equilibrium
models have played an important role in shaping participants’ behavior and their interactions.
In recent years, bilevel games and their applications to power systems have received growing
attention. Bilevel optimization models, Mathematical Program with Equilibrium Constraints
and Equilibrium Problem with Equilibrium Constraints are examples of bilevel games. This
paper provides an overview of the full range of formulations of non-cooperative bilevel
games. Our aim is to present, in an unified manner, the theoretical foundations, classification
and main techniques for solving bilevel games and their applications to power systems.

Keywords Game theory · Operation research in energy · Bilevel games · Mathematical


Program with Equilibrium Constraints · Equilibrium Problem with Equilibrium Constraints

This research has been partially supported by the CONICYT, FONDECYT/Regular 1161112 Grant and by
the Programa CSF-PAJT, Brazil, under Grant 88887.064092/2014-00.

B David Pozo
davidpozocamara@gmail.com
Enzo Sauma
esauma@ing.puc.cl
Javier Contreras
Javier.Contreras@uclm.es
1 Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro,
Rio de Janeiro, Brazil
2 Pontificia Universidad Católica de Chile, Santiago, Chile
3 University of Castilla–La Mancha, Ciudad Real, Spain

123
Ann Oper Res

1 Introduction

Operations research has become omnipresent in a large variety of disciplines, like eco-
nomics, politics and engineering. In particular, operations research has been widely used
in power system operations and planning, from the perspective of several participants as
generation companies, transmission owners, system operators and regulators among oth-
ers. Many optimization models have been proposed facing the individual interest of each
power market participant. When the outcome of an individual participant depends on the
other participants’ decisions, we talk about equilibria and game-based optimization prob-
lems. Game-based optimization models have been increasing since the early eighties, when
countries began to deregulate their electricity markets with a clear tendency towards splitting
ownership of all the activities to foster competition. In this context, decision makers—also
called market players—behave selfishly by independently maximizing their own profits or
minimizing their costs (Fudenberg and Tirole 1991). Nash equilibrium (Nash 1950) is a
common representation of such a simultaneous optimization problems providing meaningful
models for solving conflicts among interacting decision makers. In recent years, there has
been a growing interest in bilevel game approaches to model many operational and planning
problems in power systems, among others. Bilevel games, where participants make deci-
sions in sequential manner, constitute an step forward from the Nash equilibrium or one-shot
games. Although one of the first bilevel games was proposed by von Stackelberg (1934), they
have not been firmly established so far. This paper provides an overview of the full range of
formulations for non-cooperative bilevel games. Our aim is to present, in an unified manner,
the theoretical foundations, classification and main techniques for solving bilevel games.
Power systems are divided into four fundamental elements: generation, transport, distri-
bution and consumption (Kirschen and Strbac 2004). Roughly speaking, energy flows from
generation to consumption and it is traded in different markets. However, power markets
entail peculiarities distinct from other commodity markets. We can highlight the next five
features among others. Firstly, most of the energy provided at each time has to be generated,
delivered and consumed at each moment with perfect balance due to the absence of large
energy storage systems. This means that generators have to produce the energy required by
consumers at any moment considering the thermal limits of the network and other stability
constraints to avoid the power system collapse. Secondly, the formulations of load flows
follow physical laws, named Kirchhoff’s laws, which make electrical energy inextricably
linked with a physical system. In other words, energy produced by a generator cannot be
delivered to a specific consumer making indistinguishable the source of energy consumed.
Transshipment (“pipes and bubbles”) formulations of load flows do not address the effect of
Kirchhoffs voltage law, i.e., the voltage drop around any loop in a network must be zero. This
represents a big difference with respect to other network flow representation, such as ground
transportation, where vehicle flow can be redirected throughout roads (“pipes”) whilst power
flow cannot. Thirdly, there are few generation firms that cope with a very large share of the
energy generated. Fourthly, energy demand is quasi inelastic and relatively predictable on
daily and weekly bases. Fifthly, the network allowing the flow of energy from generators to
consumers is prone to congestion, creating market inefficiencies. This corroborates that we
systematically account for these particularities concerning power systems in order to better
represent the interactions among participants.
This paper introduces and defines some basic game theory concepts in non-cooperative
one- and two-level games and their applications to power systems. First we provide an intro-
duction and general insights concerning the use of equilibrium models in power systems

123
Ann Oper Res

under the restructured environment in Sect. 2. Afterwards, in Sect. 3, we give some prelim-
inary game theory definitions and notation used throughout this paper. Then, we describe
one-level games and their mathematical formulations in Sect. 4. In Sect. 5, we formulate
bilevel games ranging from the single-leader-single-follower game to the multiple-leader-
multiple-follower game. In Sect. 6 we describe the methodologies commonly used for solving
bilevel games. Finally, future challenges in bilevel games are sumarized in Sect. 7, and a brief
summary and conclusion is presented in Sect. 8.

2 Equilibrium models in deregulated power systems

In 1950 John F. Nash provided the mathematical framework for finding the equilibrium in an
n-person non-cooperative game, which is named after him as Nash equilibrium (Nash 1950).
Hundreds of publications have appeared for developing new concepts of equilibrium, new
algorithms for their resolution and new applications in almost all areas of knowledge. Game
Theory has flourished as a new branch of knowledge. Game theory captures the strategic
behavior of individual players, where an individual player decision and outcomes depend on
the choice of the other players (Fudenberg and Tirole 1991).
The application of game theory to power systems has answered new questions that have
arisen after the deregulation process. Searching for possible market equilibria is a desirable
objective both for market participants and market regulators. For participants, because an
equilibrium gives insights on possible strategies that rivals might adopt (the knowledge of
equilibria represents a valuable tool for electric companies to implement their strategies)
and for market regulators, because market power monitoring and corrective measures are
possible.
Due to the oligopolistic nature of power markets, sometimes they do not show perfect
competition and equilibrium models are desirable for analyzing the market outcomes and the
participants’ behavior. In this context, imperfect competition means that one or more market
participants can affect electricity market prices through its decisions, and is aware of this.
A good overview about electricity market modeling applications and equilibrium models in
particular can be found in Ventosa et al. (2005).
When the participants make decisions simultaneously (one-shot game) the market Nash
equilibrium can be classified as:
– Cournot equilibrium It is one of the major techniques used by power systems researchers
to study the market and the participants’ behavior (e.g. Hobbs 2002; Contreras et al. 2004).
In the Cournot equilibrium the participants choose the output quantities to submit to the
electricity market by maximizing their individual profits and assuming the competitors
do not change their outputs as a function of their competitors’ decisions. In Hobbs (2002)
two Cournot models are formulated as mixed linear complementary problems including a
DC network representation. The first one is proposed for bilateral contracts and the second
one for a pool-based market. Contreras et al. (2004) propose another model similar to
Hobbs (2002), which search for the equilibrium using a relaxation algorithm based on the
Nikaido-Isoda function instead of the KKT conditions used by Hobbs (2002). Neuhoff
et al. (2005) have investigated three market equilibrium models in open- and close-
loop frameworks, while also considering transmission constraints. They have identified
challenges to replicate results in realistic power systems due to the high sensitivity to
details about market designs in Cournot models.

123
Ann Oper Res

– Bertrand equilibrium The participants use prices as strategic variables instead of quan-
tities. When there are no capacity or transmission constraints this model is equivalent
to perfect competition (see David and Wen 2001). This approach is not widely applied
to model electricity markets. Hobbs (1986) develop a linear model for finding the elec-
tricity market equilibrium based on price competition. Lee and Baldick (2003) compare
Bertrand equilibrium outcomes with outcomes from other equilibria, where the Nash
equilibrium is formulated for a three-player game in mixed strategies for Cournot and
Bertrand games.
– Supply function equilibrium (SFE) In this approach the participants submit their bids
in both price and quantity. Each participant needs to decide their whole supply curve
for different prices and for different quantities. Although it provides a realistic model,
it is hard to compute for large power systems. SFE outcomes are similar to the Cournot
equilibrium at peak demands, when power generation almost reaches its upper limit,
and close to the Bertrand equilibrium at off-peak demands, when power capacity is
significantly higher than demand (Smeers 1997). Linear (Baldick et al. 2004; Hobbs
et al. 2000; Day et al. 2002; Weber and Overbye 1999; Liu et al. 2004), piece-wise linear
(Baldick and Hogan 2001) and step-wise supply function (Bakirtzis et al. 2007; Barroso
et al. 2006; Ruiz et al. 2012; Pozo and Contreras 2011) models have been extensively
applied for finding equilibria in electricity markets.
Note that in all previous models, the participants maximize their profits independently
assuming that the competitors do not change their outputs as a function of the competitors’
decisions. Otherwise, each participant conjectures on the competitors’ reactions using its
belief or expectation of how its rivals will react to the change of its output. The above
equilibrium approaches are sometimes merged with the term of conjectural variation (CV)
equilibrium (García-Alcalde et al. 2002). CV in Cournot decisions is applied by Song et al.
(2003) for the generation firms’ bidding problem in the day-ahead market. Conjectured SFE
is applied by Day et al. (2002) where producers choose their supply functions for bidding
modeling how rival firms will adjust their sales in response to price changes. Also, CV is
used in a range from perfect competition to Cournot competition in (Wogrin et al. 2011b) for
the generation capacity expansion problem subject to market equilibrium constraints.
When the participants make decisions at different stages (sequential game) the market
equilibrium can be classified as:
– Stackelberg equilibrium The fundamental Stackelberg equilibrium consists in a single-
leader-single-follower game where a participant called the leader decides prior to the
decisions of the other market participant called follower. The leader maximizes its profits
taking into account the best response of the follower. Since the decision of the leader
affects the decision of the follower and vice versa, the leader takes advantage of being
the first to make a decision. Stackelberg games are appropriately modeled by bilevel
programming models and both terms have been alternatively used to refer to the same type
of game interaction. Examples of applications to power systems are the formulations of
strategic bidding problem (Weber and Overbye 1999), the generation capacity investment
problem (Garcés et al. 2009), and the analysis of the vulnerability of power systems under
deliberate attacks (Arroyo 2010).
– Multiple-leader-multiple-follower equilibrium This is a generalized version of the two-
level or bilevel games. In fact, the Stackelberg equilibrium is a particular case of a
multiple-leader-multiple-follower equilibrium. In the latter, there is more than one leader
that decides in the first stage subject to the optimal reactions of several followers and the
other leader’s decisions. After the leaders make their decisions, the followers make their

123
Ann Oper Res

decisions by maximizing their profits taking into account the other followers’ decisions.
At both levels a Nash game is formed. Sometimes, these games are called Stackelberg-
Nash games (De Wolf and Smeers 1997; Xu 2005) for the single-leader-multiple-follower
case. These models usually fit within Mathematical Program with Equilibrium Con-
straints (MPEC) or Mathematical Program with Complementary Constraints (MPCC)
model representations. For the case of a multiple-leader-multiple-follower equilibrium,
Equilibrium Problem with Equilibrium Constraints (EPEC) optimization models are ade-
quate to represent the interaction between the market participants (Hu and Ralph 2007;
Ralph and Smeers 2006; Leyffer and Munson 2010). Unfortunately, EPEC models are
hard to solve and very difficult to compute for large systems.
– Generalized hierarchical equilibrium It is a generalized version of the multiple-leader-
multiple-follower equilibrium where there are more than two stages. The requirement is
that decisions are made in a sequential manner. This means that participants who act later
in the game have additional information about the actions of other participants or states
of the world. This also means that participants who act first can often influence the game
outcome. At each stage there can be a single participant or multiple participants compos-
ing an equilibrium. Decisions at each stage are optimized according to the participants’
best response at later stages (where these decisions obviously affect later stages). These
models are less common in literature due to the difficulty for solving them. In general,
they are well represented by hierarchical optimization models. As an example, Sauma
and Oren (2006) present a three-stage model. In the first stage a transmission network
planner decides the optimal line expansion subject to the generation expansion decisions
(at the second stage) and market outcomes (at the third stage). At the second stage the
problem is stated as an EPEC where multiple generation firms optimize their capacity
expansions subject to the market equilibrium outcomes at the third stage.

Some authors have used the Cournot, Bertrand or supply function equilibrium terms
to describe hierarchical games with more accuracy. Hence, we can find terms such as
Stackelberg-Nash-Cournot equilibrium in De Wolf and Smeers (1997) and Xu (2005) to
describe a Stackelberg game where decisions are made only for quantities, or Nash-Cournot
equilibrium in de la Torre et al. (2004), Hobbs (2002) and Contreras et al. (2004) for solving
multiple-leader-multiple-follower equilibrium with Cournot decisions.

3 Preliminary game theory definitions

Most of the notation used in this paper is explained throughout the text. The symbol R stands
for the set of real numbers, R+ stands for the interval [0, ∞), and Z stands for the set of integer
numbers. Italicized letters, e.g., x, are used to denote vectors and scalars. Bold symbols are
used to denote vectors or tuples, e.g., x is used to refer to the tuple x = (x 1 , x2 , . . . , xn ),
where xi denotes the i-th component of the x-tuple. The x letter refers to the decisions
of the leaders, the y letter is related with the decisions of the followers, and λ and μ are
related with the Lagrange multipliers of the lower-level problem. In general, Greek symbols
are kept for Lagrange multipliers or dual variables and Latin letters for primal variables.
Capital letters represent functions of the upper-level problem and small letter functions refer
to the lower-level problem, e.g., Fi (·) represents the objective function of the i-th leader,
and f j (·) refers to the objective function of the j-th follower. The notation x−i or y− j
refers to the competitors’ actions for the i-th leader and j-th follower, respectively. Hence,
we have x−i = (x1 , . . . , xi−1 , xi+1 , . . . , xn ) and y− j = (y1 , . . . , y j−1 , y j+1 , . . . , ym ). The

123
Ann Oper Res

symbol ξ represents a random distribution to model uncertainty. E denotes the mathematical


expectation with respect to the distribution, ξ . ξ(ω) or, sometimes, ω, represent a particular
realization or scenario of the random distribution, ξ .
A game is a formal representation of a situation in which a number of players interact
in a setting of strategic interdependence (Fudenberg and Tirole 1991). This means that the
welfare of a player depends upon their own action and the actions of the other players in the
game. A game can be either cooperative, where the players collaborate to achieve a common
goal, or noncooperative, where they act for their own benefit. Also, a game can be either of
perfect or imperfect information, and sequential or simultaneous (the players make decisions
at the same time).
A player plays a game through actions. An action is a choice or decision that a player
makes, according to their own strategy. A strategy is a rule that tells the player which action(s)
it should take, according to its own information set at any particular stage of a game. Finally,
a payoff function expresses the utility that a player obtains given the strategy profile of all
the players.
We assume that there is a finite set of players, i = 1, . . . , n participating in a game.
Each player can have an individual strategy represented by a vector xi . The overall strategies
taken by all players are represented by the x-tuple = (x 1 , . . . , xn ). The rivals’ strategies
are represented by the x−i -tuple = (x1 , . . . , xi−1 , xi+1 , . . . , xn ) that denotes all the players
strategies except for player i. X i denotes the strategy space of player i. X i can be either
continuous or integer, a convex or non convex set where the strategies can take place. For
example, X i can be defined as the set X i = {xi ∈ R K i : h i (xi ) = 0, gi (xi ) ≤ 0}, where K i
is the number of variables, xi , controlled by player i (i.e., it is the size of vector xi ).
By u i (xi , x−i ) : X 1 × X 2 ×, . . . , ×X i ×, . . . , ×X n  → R we define the payoff function
for player i. In this paper, the payoff function is considered as a cost function or a minus
profit function. Therefore, the players are interested in minimizing their payoff functions.

4 One-level games

4.1 Nash equilibrium problem

Amongst all the definitions of equilibria, the Nash equilibrium is the most widely used. The
pure Nash equilibrium constitutes a profile of strategies such that each player’s strategy is the
best response to the other players’ strategies that are actually played. Therefore, no player has
an incentive for changing their strategy. More formally, a strategy vector xe = (x1e , . . . , xne )
is the pure Nash equilibrium of a game if (1) is satisfied for all players.

u i (xie , x−i
e
) ≤ u i (xi , x−i
e
), ∀xi ∈ X i , ∀i = 1, . . . , n (1)

Note that xe solves the game in the following sense: at xe no player can improve their indi-
vidual payoff unilaterally. In essence, each player faces an optimization problem measured
by their payoff function. The set of coupled optimization problems represents a Nash equilib-
rium problem (NEP). Another equivalent definition of Eq. (1) for the (pure) Nash equilibrium
is given by (2), where the NEP is stated as a set of coupled optimization problems.
 
xie solves, minimize u i (xi , x−i
e )
xi (2)
∀i = 1, . . . , n subject to: xi ∈ X i

123
Ann Oper Res

The NEP has been widely studied, and conclusions about their existence and uniqueness
have been drawn. In its first definition, Nash (1950) proved the existence of the solutions
through the Kakutani’s fixed point theorem when the payoff functions for each player were
assumed to be concave for each xi -tuple.

4.2 Generalized Nash equilibrium problem

If the actions available for the players depend on the decisions made by their rivals (i.e.
X i = X i (x−i )) the game is known as the generalized Nash equilibrium problem (GNEP).
This term was introduced by Harker (1991). The GNEP has a wide range of applications,
although it is more difficult to solve than the standard NEP. In this context, Harker (1991)
points out relationships between the NEP and the GNEP with variational and quasi-variational
inequalities problems, respectively. Although variational inequality theory covers a wide
range of non-linear problems, they have been extensively used to model equilibrium problems
(for further details see Facchinei et al. 2007; Pang and Fukushima 2005; Facchinei and Pang
2003). One of the pioneering works on power system economic equilibrium was proposed by
Arrow and Debreu (1954), described as GNEP. Jing-Yuan and Smeers (1999) also proposed a
power system equilibrium with Cournot generators and regulated transmission prices stated
as GNEP.
Equations (3) and (4) represent the GNEP as a system of inequalities or as a set of
optimization problems, respectively.
   
u i xie , x−i
e
≤ u i xi , x−i
e
, ∀xi ∈ X i (x−i
e
), ∀i = 1, . . . , n (3)
   
xie solves, minimize u i xi , x−i
e
xi   (4)
∀i = 1, . . . , n subject to: xi ∈ X i xe
−i

In the next example we give a graphic interpretation for the NEP and GNEP strategy
spaces for a two-player game.

Example 1 Given a two-player game, player 1 chooses amongst the strategies x 1 ∈ X 1 ⊆ R


and player 2 chooses amongst the strategies x2 ∈ X 2 ⊆ R, given the payoff functions,
u 1 (x1 , x2 ) : X 1 × X 2  → R for player 1 and u 2 (x1 , x2 ) : X 1 × X 2  → R for player 2.
The NEP for the two-player game is defined as (5).
⎧   ⎫
 e e ⎨ minimize u 1 x1 , x2e , s.t. x1 ∈ X 1 ⊆ R ⎬
x1 , x2 solves x1   (5)
⎩ minimize u 2 x1e , x2 , s.t. x2 ∈ X 2 ⊆ R ⎭
x2

In the GNEP two-player game, the set of strategies of player 1 depends on the decisions
of player 2. So player 1 can choose among the strategies x 1 ∈ X 1 (x2 ) ⊆ R, where X 1 (x2 )
represents the parameterized domain set of x1 in terms of their competitor’s decision, x2 .
Analogously, player 2 chooses among the strategies x 2 ∈ X 2 (x1 ) ⊆ R. Therefore, at the
(pure) Nash equilibrium point, (x1e , x2e ), the domain sets of strategies are defined as x1 ∈
X 1 (x2e ) ⊆ R for player 1 and x2 ∈ X 2 (x1e ) ⊆ R for player 2.
The GNEP for the two-player game is defined as a set of optimization problems (6).
⎧   ⎫
 e e ⎨ minimize u 1 x1 , x2e , s.t. x1 ∈ X 1 (x2e ) ⊆ R ⎬
x1 , x2 solves x1   (6)
⎩ minimize u 2 x1e , x2 , s.t. x2 ∈ X 2 (x1e ) ⊆ R ⎭
x2

123
Ann Oper Res

x2 x2
X1 (x2 )

X2 X2

X2 (x1 )

x1 x1
X1 X1
Fig. 1 Example of (closed and convex) sets of strategies: left for the NEP defined in (5), right for the GNEP
defined in (6)

Figure 1 shows an example of the (closed and convex) space of the strategy sets for
the two-player game in the case of solving the NEP (left-hand side) or solving the GNEP
(right-hand side).
Note that a pure Nash equilibrium must always belong to the intersection of the over-
all players’ strategic spaces. Therefore, the two-player equilibrium must belong to the set
X (x1 , x2 ) ⊆ R2 = X 1 ∩ X 2 for the NEP or X (x1 , x2 ) ⊆ R2 = X 1 (x2 ) ∩ X 2 (x1 ) for the
GNEP. This motivates the next Nash equilibrium definition.

4.3 Generalized Nash equilibrium problem with shared constraints

A GNEP with shared constraints is a special instance of GNEP (3) and (4) presented in the
previous section. In this game there exists a set of common constraints that simultaneously
restrict each player’s optimization problem. Shared constraints games were introduced by
Rosen (1965), who proved the existence and uniqueness of the equilibrium when the set
of shared constraints is closed, convex and bounded and the payoff functions satisfy diag-
onal strict concavity. Kulkarni and Shanbhag (2014) and Kulkarni (2011) claimed to find
the global pure Nash equilibrium for bilevel games with shared constraints and potential
payoff functions (Monderer and Shapley 1996) and quasi-potential functions (Kulkarni and
Shanbhag 2015).
Because the GNEP is non-convex and mathematically irregular, with no general sufficient
conditions for existence known, it does not admit tractable sufficient conditions, becoming
intractable1 in most cases. Some authors propose to convert the original problem into a GNEP
with shared constraints (Leyffer and Munson 2010; Kulkarni and Shanbhag 2013, 2014)
improving tractability and, in some cases, existence and uniqueness. However, solutions may
differ from the original GNEP as we show below. The modifications consist of including the
competitors’ constraints set for each player. This is equivalent to add the overall player’s set
of space constraints, X (x), to each optimization problem, which is defined as the intersection
n
of all the players’ strategies spaces, i.e. X (x) = i=1 X i (x−i ). The GNEP with shared
constraints is defined as a set of inequalities (7) or a set of optimization problems (8).

1 Another problem is the ambiguity on the leader’s decision because of the multiplicity of solutions (Leyffer
and Munson 2010). This issue is analyzed in Sect. 6.4.

123
Ann Oper Res

x2 x2
X1 (x2 ) X(x1 , x2 )

X2 (xe1 ) X2 (xe1 )
(xe1 , xe2 ) (xe1 , xe2 )

X2 (x1 )

x1 x1
X1 (xe2 ) X1 (xe2 )

Fig. 2 Example of (closed and convex) sets of strategies: left for the GNEP defined in (6), right for the GNEP
with shared constraints defined in (9)

     
u i xie , x−i
e
≤ u i xi , x−i
e
, ∀xi ∈ X xi , x−i
e
, ∀i = 1, . . . , n (7)
 
xie solves, minimize u i (xi , x−i
e )
xi (8)
∀i = 1, . . . , n subject to: xi ∈ X (xi , xe )
−i
We illustrate the GNEP and GNEP with shared constraints for a two-player game in
Example 2.
Example 2 Based on Example 1 for a two-player game, the equivalent GNEP with shared
constraints is defined as:
⎧     ⎫
⎨ minimize u 1 x1 , x2e , s.t. x1 ∈ X x1 , x2e ⊆ R ⎬
(x1e , x2e ) solves x1     (9)
⎩ minimize u 2 x1e , x2 , s.t. x2 ∈ X x1e , x2 ⊆ R ⎭
x2

where X (x1 , x2 ) = X 1 (x1 , x2 ) ∩ X 2 (x1 , x2 ).


Figure 2 illustrates the strategy spaces for the GNEP at the left-hand side and for the
GNEP with shared constraints at the right-hand side. For both problems a Nash equilibrium
solution must hold in the X (x1 , x2 ) space. But, as can be seen in Fig. 2, any player strategy
space is more restricted in the shared constraints case than in the general one. For example,
when player 1 chooses x1e , player 2 optimizes their payoff function over X 2 (x1e ) ⊆ R. This
set is more constrained for the shared constraints case than for the coupled one.
Because the space of strategies changes for both problems, the Nash equilibria may differ
between both game representations.
Due to the modification of the strategy space for the players in the shared constrained case,
the solutions of both problems may differ. A solution of the GNEP problem is a solution of
the GNEP with shared constraints, but not viceversa (see Kulkarni 2011). Therefore the
modified GNEP with shared constraints has at least the same Nash equilibria as the GNEP.
For further details about GNEP with shared constraints see (Kulkarni 2011; Kulkarni and
Shanbhag 2014; Leyffer and Munson 2010).
In the next example we illustrate the solution set obtained for the NEP, GNEP, and GNEP
with shared constraints.

123
Ann Oper Res

Fig. 3 NEP solution from Eq. x2


∇2 u2 (x1 , x2 ) ∇1 u1 (x1 , x2 )
(5)

NE

X2

x1
X1

Fig. 4 GNEP solution from Eq. x2


(6) ∇2 u2 (x1 , x2 ) ∇1 u1 (x1 , x2 )

X1 (x2 )

GNE
X2 (xe1 )
X2 (x1 )

x1
X1 (xe2 )

Example 3 Based on the previous two-player game from Examples 1 and 2, we define a linear
payoff function for both players. The gradients of their objective functions are ∇1 u 1 (x1 , x2 )
for player 1 and ∇2 u 2 (x1 , x2 ) for player 2. They are represented in Figs. 3, 4 and 5 with
the space of strategies for each player. The arrows point at the optimization direction of the
objective function for each player.
The NEP solution is illustrated in Fig. 3. There is a single Nash equilibrium located in one
vertex of the space of strategies. Note that for any other point of the strategies set, player 1
always chooses the highest value of x1 , given any competitor’s strategy. Similarly, player 2
always chooses the highest value of x2 , given any x1 . From the space of strategies it is easy
to deduce that there is only one Nash equilibrium.
The GNEP is illustrated with Fig. 4, where there is also a single generalized Nash equi-
librium. Notice that for a fixed strategy of player 2, x2e , player 1 chooses the highest value
of x1 ∈ X 1 (x2e ). And for player 1 decision fixed at x1e , player 2 chooses the highest value
x2 ∈ X 2 (x1e ). There is only a single point where both players minimize their payoff functions
simultaneously and they do not have better alternatives to choose. It is the GNE shown in
Fig. 4.
The GNEP with shared constraints is illustrated in Fig. 5. If player 1 is fixed at any point
of the set of GNE x1e , player 2 chooses x2 ∈ X 2 (x1e ) = X (x1e , x2 ), and the point in the thick
boundary is the one that minimizes the payoff function for player 2. Analogously, player 1
does not deviate from any fixed point of player 2 placed in the thick line. Therefore, the thick
line represents an infinite number of GNEs. GNEs have different objective values for both

123
Ann Oper Res

Fig. 5 GNEP with shared x2


constraints solutions from Eq. (9) ∇2 u2 (x1 , x2 ) ∇1 u1 (x1 , x2 )

X2 (xe1 )
Set of GNE solutions

X(x1 , x2 )

x1
X1 (xe2 )

players. The infinite number of GNEs includes the equilibrium for the general case where
the constraints are not shared.

4.4 Stochastic generalized Nash equilibrium problem

A stochastic GNEP is an extension of the GNEP including uncertainty. Among several


possible formulations, we provide one in which the payoff function is based on the expected
values and solved as a stochastic optimization problem (Birge and Louveaux 2011; Conejo
et al. 2010).
Some stochastic optimization problems include risk measures for hedging against uncer-
tainty. But, in general, those problems have many Pareto-efficient solutions. Different
attitudes about risk imply different costs (or profits). Such risk attitudes are selected by
the decision maker in terms of risk aversion. Because a risk attitude is not always evident
for the decision maker and, therefore, for their competitors, the Nash equilibrium problem
including risk hedging has a difficult economic interpretation. Some approaches for solving
stochastic Nash equilibria as robust NE problems (or worst-case) are studied by Hayashi et al.
(2005) and Xu and Zhang (2013) in terms of the expected values. Some authors as Kannan
et al. (2013) have included risk, defined as Conditional Value at Risk (CVaR) (Rockafellar
and Uryasev 2000), in the payoff function as a penalty term for each player, but risk aversion
is equally assumed for all players and it is chosen arbitrarily.
Considering risk-neutral players, the stochastic GNEP is given by:
   
xie solves, minimize E u i xi , x−i e ,ξ
xi   (10)
∀i = 1, . . . , n subject to: xi ∈ X i xe , ξ
−i

The stochastic GNEP involves some random variables represented by ξ . A sample average
method is frequently used for solving stochastic problems because they have two specific
features: the random variable is seldom fully known and, even if it is known, solving the
problem with this function makes it non tractable. Therefore, a sampling method of scenarios,
like Monte Carlo simulation, resolves these problems and the stochastic optimization problem
becomes an equivalent deterministic optimization one. Equation (11) shows the scenario-
based optimization problem formulation.
   
xie solves, minimize E u i xi , x−i
e , ξ(ω)
xi   (11)
∀i = 1, . . . , n subject to: xi ∈ X i xe , ξ(ω)
−i

123
Ann Oper Res

4.5 Finite-strategy Nash equilibrium problem

Finite-strategy games or just finite games have been widely studied in literature since J. F.
Nash formulated the equilibrium problem in (Nash 1950) with discrete decisions. In these
games, the players have a discrete set of strategies. Therefore, the set of overall actions that
the i-th player can select is xi ∈ X i = {xi1 , xi2 , . . . , xiK i }, where K i is the total number of
strategies that player i can choose.
Based on the previous definition of the NEP, the finite NEP is formulated as a set of
inequalities (12) or as a set of optimization problems (13).
     
K
u i xie , x−i
e
≤ u i xi , x−i
e
, ∀xi ∈ xi1 , . . . , xi i , ∀i = 1, . . . , n (12)
⎧   ⎫
⎨ minimize u i xi , x−ie ⎬
xie solves, xi   (13)
∀i = 1, . . . , n ⎩ subject to: xi ∈ x 1 , . . . , x i ⎭
K
i i

Due to the finite number of strategies, the payoff matrix of the game can be constructed,
where each strategy combination is evaluated at the payoff function of each player. Algorithms
for solving Nash equilibria from its payoff matrix are well known (Fudenberg and Tirole
1991). An alternative way to construct the payoff matrix is to solve the inequality system
proposed in 14 by repeating the inequality for every available strategy of each player.
   
u i xie , x−i
e
≤ u i xiki , x−i
e
, ∀ki = 1, . . . , K i , ∀i = 1, . . . , n (14)

The i-th payoff at the equilibrium [(left-hand side of (14)] must be less than or equal to the
i-th payoff for any other available strategy for the i-th player, when the rest of the players have
no incentivesto change their strategies, i.e., when
they are at the equilibrium. The inequality
n n
system has i=1 K i inequalities instead of the i=1 K i elements of the payoff matrix.

Example 4 Based on the previous two-player game from Example 1, now the strategy space
for player 1 and player 2 is discretized in 6 and 7 levels, respectively. Therefore, player 1 can
choose amongst the strategies x1 = {x11 , x12 , . . . , x16 } and player 2 can choose amongst the
strategies x2 = {x21 , x22 , . . . , x27 }.
The finite NEP for the two-player game is defined in (15). Figure 6 shows the discrete
strategy space.

⎧   ⎫

⎪ u 1 (x1e , x2e ) ≤ u 1 x11 , x2e ⎪


⎪ ... ⎪


⎪   ⎪


⎪ u 1 (x1 , x2 ) ≤ u 1 x1 , x2
e e 6 e ⎪


⎪ ⎪


⎪ ⎪


⎪   ⎪


⎪ ⎪

 e e ⎨ u 2 (x e ,
1 2 x e ) ≤ u 2 x e ,
1 2 x 1

x1 , x2 solves . . .   (15)

⎪ ⎪


⎪ u 2 (x1e , x2e ) ≤ u 2 x1e , x27 ⎪


⎪ ⎪


⎪   ⎪


⎪ ⎪


⎪ x1 ∈ x1 , x1 , x1 , x1 , x1 , x1
e 1 2 3 4 5 6 ⎪


⎪ ⎪


⎪   ⎪

⎩ e ⎭
x2 ∈ x2 , x2 , x2 , x2 , x2 , x2 , x2
1 2 3 4 5 6 7

Assume that the gradients of the payoff functions are the same as in Example 3,
∇1 u 1 (x1 , x2 ) for player 1 and ∇2 u 2 (x1 , x2 ) for player 2, and defined only for the discretized
strategies based on the original continuous case. The finite NEP solution is unique and located

123
Ann Oper Res

Fig. 6 Discrete strategy set and x2


∇2 u2 (x1 , x2 ) ∇1 u1 (x1 , x2 )
solution for the finite NEP

x72
x62 NE

x52
x42
x32
x22
x12

x11 x21 x31 x41 x51 x61 x1

at (x1e , x2e ) = (x16 , x27 ). It is represented with a bigger dot in Fig. 6. In this case, the solu-
tion from the finite NEP remains the same as in the original continuous problem. But the
NEP solution from the discretized game may be different from the solution of the original
continuous game. However, the discretized game could be more tractable for solving global
equilibria than the original computational problem, which maybe not tractable, or the payoff
functions maybe non-convex. In general, it is not possible to find global solutions for the
NEP in games with non-convex payoff functions.
The smoothness and convexity properties of the payoff function are not necessary for
finding a global solution of the proposed model (14), since the inequality system checks that
the equilibrium strategy is better than or equal to other available strategies for all finite values
of each player. Converting the finite NEP into an inequality system increases the number of
equations, but solves the problem of having non-convex and/or non-smooth payoff functions
in order to solve for a global2 NEP solution. Besides that, the inequality system can be added
as a set of constraints of a more complex hierarchical optimization problem.

4.6 Finite generalized Nash equilibrium problem with shared constraints

The finite-strategy approach proposed above has limitations for the GNEP because the set of
inequalities must be evaluated for all the finite strategies of each player when the other players
are in the equilibrium. In other words, all the discrete strategies xi ∈ {xi1 , . . . , xiK i } must be
feasible given a fixed decision vector x−i in the equilibrium, which is more restrictive than
the conventional definition of the GNEP. The latter forces feasibility only at the equilibrium
solution, i.e., xie . Therefore, a discretization of the GNEP entails a reduction of the original
feasible region and the equilibria may be different.
In the next example we clarify this fact for a GNEP with shared constraints.
Example 5 Based on the previous two-player game (Example 4), we have added a new
shared constraint over the set of strategies of both players, x1 and x2 (see Fig. 7). The
payoff functions’ gradients are ∇1 u 1 (x1 , x2 ) for player 1 and ∇2 u 2 (x1 , x2 ) for player 2, as
in the previous examples. Then, the set of solutions for the (continuous) GNEP with shared
constraints is represented by a thick line. Note that there is an infinite number of GNE.
Now, we have discretized the problem with the same levels as in the previous Example, i.e,
player 1 can choose amongst the strategies x1 = {x11 , x12 , . . . , x16 } and player 2 can choose

2 The Nash equilibrium may be referred as global Nash equilibrium or global NEP solution in order to
differentiate it from local Nash equilibrium.

123
Ann Oper Res

Fig. 7 Discretized GNE with


x2 ∇2 u2 (x1 , x2 ) ∇1 u1 (x1 , x2 )
shared constraints

x72
x62
Set of continuos GNE
x52
x42 Discretized GNE
x32
Reduced feasibe region
x22
x12

x11 x21 x31 x41 x51 x61 x1

amongst the strategies x2 = {x21 , x22 , . . . , x27 }. The equivalent finite GNEP is the same as
in the previous example (15). We assume the payoff function is known at each discrete
combination of strategies, based on the payoff gradients from the continuous problem. Then,
player 1 chooses the highest values for their own strategies, x 1 , while player 2 is interested
in choosing the highest values of their own strategies, x 2 .
Assume that, if the equilibrium decision of player 1 is x 1e = x16 , then, player 2 must evaluate
the payoff function, u 2 , at all their finite available strategies with x1e = x16 . But, for the
cases when variable x2 takes the values {x26 , x27 }, the problem becomes infeasible. Therefore,
x1e = x16 can not be solution of the problem (15). Then, the solutions of the discretized GNEP
with shared constraints are searched in a reduced feasible region represented in Fig. 7 in dark
color. This reduced feasible region constitutes an equivalent standard NEP feasible region,
in which the decision of each player is not constrained by the decisions of the other players.
The solution of the discretized GNEP is represented with a bigger dot and it differs from
the original continuous GNEP.

Discretized GNEPs have limitations using the proposed approach, as we have illustrated
in the previous example. But they can succeed with other problems, such as finding global
solutions for non-linear and non-convex payoff functions, or finding all pure Nash equilibria,
as will be described in the next subsection.

5 Bilevel games

Bilevel games are two-stage hierarchical games where players make decisions in sequence.
The simplest bilevel game is the so-called Stackelberg game (von Stackelberg 1934) or
single-leader-single-follower game, where a leader makes decisions prior to the follower’s
decisions.
As a generalization of the two-player Stackelberg game, new bilevel games have been
proposed in the game theory literature. In these generalizations, the lower and/or upper
level have more than a single player. Thus, the players at the upper level (leaders) make
decisions simultaneously competing among them and prior to the decisions of the players at
the lower level (followers). After the leaders make their decisions, the followers make their
decisions, also competing among themselves. The decisions of the followers are made taking
into consideration the leaders’ and the other followers’ decisions. Since a follower competes
against other followers, the lower-level problem forms a Nash subgame parameterized in

123
Ann Oper Res

Fig. 8 Single-leader-single-
follower game Leader

x y

Follower

terms of the leaders’ decisions. In a similar manner, in the upper-level problem, the leaders
make simultaneous decisions considering the optimal response of the followers. The leaders
compete against each other in the upper-level problem as in a Nash equilibrium subgame.
The solution of a bilevel game is also related with subgame perfect equilibrium (Fudenberg
and Tirole 1991) under the perfect information assumption. In that sense, a solution of the
whole problem corresponds to a subgame perfect equilibrium if the continuation strategy
in each subgame is also a solution (Nash equilibrium) of that subgame (e.g. lower-level
problem).
In bilevel games, leaders and followers can be either different or the same players
at both levels, but making different decisions. Depending on the number of players at
the upper or lower levels, bilevel games can be classified into four categories: single-
leader-single-follower, single-leader-multiple-follower, multiple-leader-single-follower and
multiple-leader-multiple-follower games.
In general, bilevel games can be formulated as bilevel optimization problems (Dempe
2002a; Colson et al. 2007). When there are multiple players at the lower-level problem, the
problem can be rewritten as a set of equilibrium constraints in the optimization problem
of the leader(s). In case of a single leader, the problem is stated as an MPEC optimization
problem (Dempe 2003; Luo et al. 1996). If, instead, there are several players at the upper-
level problem, it can be stated as an EPEC optimization problem (Ralph and Smeers 2006;
Hu and Ralph 2007; Zhang 2010).

5.1 Single-leader-single-follower games

A single-leader-single-follower game is the most basic instance of a bilevel optimization


problem (Dempe 2002a; Colson et al. 2007). The leader’s problem is at the upper level,
where the leader chooses a decision vector, x, first. After the leader has made its decision,
the follower chooses its decision vector, y, solving the lower-level optimization problem (see
Fig. 8).
The follower’s optimization problem is parameterized in terms of the upper-level decision,
x. Formally, the follower selects a vector, y(x), in some closed set, Y , where their objective
function, f (x, y), is minimized. The optimal set of solutions of the lower-level problem is
denoted by S (x). Then, a vector ȳ(x) belongs to the optimal set of solutions of the lower-level
problem, i.e., ȳ(x) ∈ S (x), if and only if:
 
minimize f (x, y)
ȳ(x) solves y (16)
subject to: y ∈ Y (x)

On the other hand, the leader minimizes its objective function, F(x, ȳ), in some closed set
X , taking into account the optimal response of the follower, ȳ(x) ∈ S (x). This is formally
described as follows:

123
Ann Oper Res

Fig. 9 Single-leader-multiple-
follower game Leader

x y1 x yM
y1
Follower 1 Follower M
yM

⎧ ⎫

⎨ minimize F (x, ȳ) ⎪⎬
x, ȳ
(x e , y e ) solves subject to: x ∈ X (17)

⎩ ⎪

ȳ ∈ S (x)

Here, we have used the superscript e to represent the optimal solution for the whole
problem (upper and lower level). Additionally, we can extend the conventional definition of
bilevel problems including the Lagrange multipliers from the lower-level to the upper-level
objective function and constraints. In this sense, the Lagrange multipliers solution from the
lower-level can affect the decisions of the leader (for instance, when the lower-level problem
represents a market equilibrium).
Then, the single-leader-single-follower optimal solution is obtained by solving the prob-
lem (18)–(19).
⎧ ⎫

⎪ minimize F(x, ȳ, λ̄, μ̄) ⎪


⎪ x, ȳ,λ̄,μ̄ ⎪


⎨ subject to: ⎪

(x , y , λ , μ ) solves
e e e e
G(x, ȳ, λ̄, μ̄) ≤ 0 ⎪ (18)

⎪ ⎪

⎪ ⎪

H (x, ȳ, λ̄, μ̄) = 0 ⎪

⎩ ⎭
( ȳ, λ̄, μ̄) ∈ S (x)

where ( ȳ, λ̄, μ̄) ∈ S (x) if and only if:


⎧ ⎫

⎨ minimize f (x, y) ⎪

y,λ,μ
( ȳ, λ̄, μ̄) solves subject to: g(x, y) ≤ 0, μ (19)

⎩ ⎪

h(x, y) = 0, λ

5.2 Single-leader-multiple-follower games

A single-leader-multiple-follower game is a Stackelberg problem extension with multiple


followers, where the followers are competing among themselves. Figure 9 represents the
structure of this game. In this game, a single leader makes its optimal decision, x, prior to
the decision of multiple followers, who are competing among themselves. Given the optimal
decision of the leader, x, each j-th follower makes its optimal decision, y j , parametrized by
its competitors’ optimal decisions, ȳ− j .
The single-leader-multiple-follower equilibrium solution is given by solving problem
(20)–(21). Vector (x e , ye , λe , μe ) represents the optimal values of the decisions of the leader
and the followers, as well as the Lagrange multipliers of the lower-level problem. The leader
minimizes its objective function, F(·), which depends on the leader’s decision, x, the optimal
decisions of the followers, ȳ, and the optimal value of the Lagrange multipliers, λ̄ and μ̄,
from the lower-level problem. The upper-level problem (20) is constrained by the functions
G(·), H (·) and the set of the optimal solutions of the followers, S (x), which is parameterized

123
Ann Oper Res

by the leader’s decision, x, and found by solving a set of m coupled problems in the lower
level (21).
⎧ ⎫

⎪ minimize F(x, ȳ, λ̄, μ̄) ⎪


⎨ x,ȳ,λ̄,μ̄ ⎪

(x e , ye , λe , μe ) solves subject to: G(x, ȳ, λ̄, μ̄) ≤ 0 (20)

⎪ H (x, ȳ, λ̄, μ̄) = 0 ⎪


⎩ ⎪

(ȳ, λ̄, μ̄) ∈ S (x)

where (ȳ, λ̄, μ̄) ∈ S (x) if and only if:


⎧ ⎫

⎨ minimize f j (x, y j , ȳ− j ) ⎪

( ȳ j , λ̄ j , μ̄ j ) solves, y j ,λ j ,μ j
subject to: g j (x, y j , ȳ− j ) ≤ 0, μ j ⎪ (21)
∀ j = 1, . . . , m ⎪ ⎩ ⎭
h j (x, y j , ȳ− j ) = 0, λ j
The ȳ-tuple is the Nash equilibrium of the followers for the leader’s decision, x. The
variables λ̄ and μ̄ represent the Lagrange multipliers for the equality and the inequality
constraints of the followers, respectively. If the lower-level problem is convex for each j-th
follower and each upper-level decision x, then, KKT (necessary and sufficient) optimality
conditions guarantee global optimality for each j-th follower problem. But the simultaneous
j-th follower’s problems may not have a solution, may have only one solution, or may have
multiple solutions because they are coupled. The set of the solutions represented by S (x) is
rewritten sometimes as an equivalent system of constraints (e.g., KKT conditions) added to the
upper-level problem (20). This system of constraints is the so-called equilibrium constraint
set. The single-leader-multiple-follower problem can be stated as an MPEC optimization
problem (Dempe 2003; Luo et al. 1996).

5.3 Multiple-leader-single-follower games

A multiple-leader-single-follower game is a case when several players (leaders) simultane-


ously anticipate the decisions of a single player (follower). Because all the leaders make
decisions at the same time, the upper-level problem is defined as finding a Nash equilibrium
of the leaders. Figure 10 illustrates the game structure. Multiple-leader-single-follower games
are appropriate for representing liberalized markets, where participants have to interact with
the market submitting offers prior to the clearance of the market. Market participants are at
the upper level and market operator is at the lower level. The Lagrange multipliers of the
lower-level problem represent, on many occasions, the price of the resource traded in the
market.
The formulation of the multiple-leader-single-follower game is given by (22) and (23).
Solving (22) and (23) means solving a set of n bilevel problems (one per each leader). Because
leaders’ problems depend on the competitors’ decisions, the set of the n problems is coupled,
which complicates the resolution of the problem. EPEC techniques (Ralph and Smeers 2006;

Fig. 10 Multiple-leader-single- x1
follower game
Leader 1 xN Leader N

x1 y xN y

Follower

123
Ann Oper Res

Hu and Ralph 2007; Zhang 2010) can be applied to solve this kind of problems. Note that, even
though the lower-level problem is common for all leaders, the response in primal and dual
variables could be different because each leader makes its own expectation of its competitors’
decisions, which affect the expected follower’s response. We have emphasized this in (22)
and (23) by using the superscript (i) for the lower-level problem variables.

⎧  e , ȳ (i) , λ̄(i) , μ̄(i)


 ⎫
⎪ minimize Fi xi , x−i ⎪

⎪ , (i) ,λ̄(i) ,μ̄(i) ⎪



x i ȳ ⎪

⎨ ⎬
(xie , y e , λe , μe ) solves, subject to:  
(i) (i)
G i  xi , x−i , ȳ , λ̄ , μ̄  ≤ 0 ⎪
e (i) (22)
∀i = 1, . . . , n ⎪


⎪ e , ȳ (i) , λ̄(i) , μ̄(i) = 0 ⎪⎪


⎩ Hi(i)xi , (i)
x−i
(i)
   ⎪

ȳ , λ̄ , μ̄ ∈ S xi , x−i e

where ( ȳ (i) , λ̄(i) , μ̄(i) ) ∈ S (xi , x−i


e ) if and only if:

⎧  e , y (i)
 ⎫

⎨ yminimize f xi , x−i ⎪

(i) ,λ(i) ,μ(i)
( ȳ (i) , λ̄(i) , μ̄(i) ) solves subject to: g xi , xe , y (i)  ≤ 0, μ(i) (23)

⎩  −i  ⎪
e , y (i) = 0, λ(i) ⎭
h xi , x−i

Thus, the lower-level problem (23) is an optimization problem parameterized by the upper-
level decisions of each of the i-th leaders. If the lower-level problem is a convex optimization
problem, it can be reformulated as a set of constraints expressed as KKT conditions. This
set of constraints is different for each i-th leader’s problem, and it should be added to every
corresponding upper-level problem. Leyffer and Munson (2010) and Kulkarni and Shanbhag
(2012) have asserted that, when the lower level represents the market operation, the Lagrange
multipliers should be the same for all leaders, i.e., λ̄(i) = λ̄, ∀i and μ̄(i) = μ̄, ∀i. This is
commonly known as price consistency, where there is no price discrimination among the
leaders. A price-consistent formulation is more restrictive than the one in (22)–(23), and it
may not have a solution although the original one has. However, a price-consistent formulation
is easier to solve than problem (22)–(23) due to the reduction in the number of variables and
constraints. A correspondence between the GNEP with shared constraints and variational
equalities (Facchinei and Pang 2003) has been proposed in Kulkarni and Shanbhag (2012)
from an economic standpoint for an equilibrium with uniform prices (price consistency). In
this context, Kulkarni and Shanbhag (2012) provide a theory that gives sufficient conditions
to prove that a solution for the equivalent variational equality problem is a refinement of the
GNEP problem with shared constraints, i.e. a subset of all equilibria for the GNEP problem
that holds the price consistency premise.

5.4 Multiple-leader-multiple-follower games

A multiple-leader-multiple-follower game is the most general instance of a bilevel game,


where several leaders competing among themselves have to make decisions in the first stage
prior to the decisions of a set of followers competing among themselves in the second stage
(see Fig. 11).
The multiple-leader-multiple-follower problem is given by a set of n coupled MPEC
problems, one for each leader, and given by (24)–(25). This problem is stated as an EPEC
(Ralph and Smeers 2006; Hu and Ralph 2007; Zhang 2010).

123
Ann Oper Res

Fig. 11 Multiple-leader- x1
multiple-follower game
Leader 1 xN Leader N

x1 y1 xN yM
y1
Follower 1 Follower M
yM

⎧ ⎫
⎪ minimize Fi (xi , x−i e , ȳ(i) , λ̄(i) , μ̄(i) ) ⎪

⎪ ⎪



(i)
xi ,ȳ(i) ,λ̄ ,μ̄(i) ⎪

⎨ ⎬
(xie , ye , λe , μe ) solves, subject to: G (x , xe , ȳ(i) , λ̄(i) , μ̄(i) ) ≤ 0
i i −i (24)
∀i = 1, . . . , n ⎪
⎪ e , ȳ(i) , λ̄(i) , μ̄(i) ) = 0 ⎪


⎪ Hi (xi , x−i ⎪


⎩ (i) ⎪

(i) (i)
(ȳ , λ̄ , μ̄ ) ∈ S (xi , x−i ) e

(i)
where (ȳ(i) , λ̄ , μ̄(i) ) ∈ S (xi , x−i
e ) if and only if:

⎧   ⎫
⎪ , e , y (i) , ȳ(i) ⎪

⎪ minimize f j x i x −i j −j ⎪

(i) (i) (i) ⎪
⎨ y (i) (i) (i)
j ,λ j ,μ j


( ȳ j , λ̄ j , μ̄ j ) solves,  
(i) (i) (i) (25)
⎪ subject to: g j xi , x−i , y j , ȳ− j  ≤ 0, μ j ⎪
e
∀ j = 1, . . . , m ⎪ ⎪

⎪ ⎪
⎩ h j xi , x−i e , y (i) , ȳ(i) = 0, λ(i) ⎪

j −j j

In Sect. 6, we describe some common techniques to solve this type of problems.

5.5 Stochastic multiple-leader-multiple-follower games

The perfect information hypothesis has been assumed in the previous Nash game definitions.
This means that all players, leaders and followers have perfect information about their com-
petitors’ payoff functions, available strategies and constraints. Additionally, all the exogenous
parameters have been assumed deterministic, although some of them are random, such as
demand or cost. In this section, we introduce stochasticity in bilevel games. In particular, we
expand the general case, the multiple-leader-multiple-follower game, to a stochastic game.
We assume the stochastic bilevel game is played in two stages. At the first stage, the
leaders make their decisions in a Nash equilibrium setting, prior to the knowledge of any
scenario realization and considering the best responses of the followers. After the leaders
make their decisions, the scenario realization of the random vector, ξ , is known at the second
stage and the followers make their decisions in a Nash equilibrium setting. Therefore, the
lower-level equilibrium is solved for any realization of the random process, ξ . Then, the set of
(equilibrium) solutions from the lower level are random variables in terms of such a random
process. If we define ξ as a random distribution to model uncertainty, the lower-level problem
variables are now (ȳ(i) (xi , x−ie , ξ ), λ̄(i) (x , xe , ξ ), μ̄(i) (x , xe , ξ )) ∈ S (x , xe , ξ ).
i −i i −i i −i
The stochastic multiple-leader-multiple-follower optimization problem is given by (26)–
(29)
⎧ ⎫

⎪ minimize E [Fi (·)] ⎪


⎪ (i)
xi ,ȳ(i) ,λ̄ ,μ̄(i) ⎪

⎨ ⎬
(xi , y , λ , μ ) solves, subject to: G i (·) ≤ 0
e e e e
(26)
∀i = 1, . . . , n ⎪
⎪ Hi (·) = 0 ⎪


⎪ ⎪

⎩ (i) e , ξ) ⎭
(ȳ(i) , λ̄ , μ̄(i) ) ∈ S (xi , x−i

123
Ann Oper Res

(i)
where (ȳ(i) , λ̄ , μ̄(i) ) ∈ S (xi , x−i
e , ξ ) are given for the random distribution, ξ , if and only

if:

⎧ ⎫
⎪ minimize f j (·) ⎪

⎨ y (i) (i) (i) ⎪

(i) j ,λ j ,μ j
(ȳ(i) , λ̄ , μ̄(i) ) solves, (i) (27)
∀ j = 1, . . . , m ⎪ subject to: g j (·) ≤ 0, μ j ⎪

⎩ (i) ⎭

h j (·) = 0, λ j

and where the variables from the second stage are defined as:

ȳ(i) = ȳ(i) (xi , x−i


e , ξ)
(i) (i)
λ̄ = λ̄ (xi , x−i e , ξ) (28)
(i) (i)
μ̄ = μ̄ (xi , x−i e , ξ)

and the payoff and constraints functions are defined as:

(i)
e , ȳ(i) , λ̄ , μ̄(i) , ξ )
Fi (·) = Fi (xi , x−i
e , ȳ(i) , λ̄(i) , μ̄(i) , ξ )
G i (·) = G i (xi , x−i
Hi (·) = Hi (xi , x−i e , ȳ(i) , λ̄(i) , μ̄(i) , ξ )
e , y (i) , ȳ(i) , ξ )
(29)
f j (·) = f j (xi , x−i j −j
(i) (i)
g j (·) = g j (xi , x−i
e , y , ȳ , ξ )
j −j
(i) (i)
h j (·) = h j (xi , x−i
e , y , ȳ , ξ )
j −j

The upper-level constraints and payoff functions are defined in terms of expectations with
respect to the random variable, ξ. This means the leaders are risk-neutral agents. Other formulations, such as risk measures, could be used instead, but that analysis is outside the scope of this paper. The lower-level constraints and payoff functions are defined for every realization of the random variable, ξ. When a scenario-based approach is applied, the random variable ξ is sampled into scenarios, indexed by ω, and the true distribution is substituted by the scenario realizations, ξ(ω). Then, an equivalent deterministic optimization problem is obtained by replacing the random variable ξ with its sampled counterpart, ξ(ω).
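As a sketch of this scenario-based construction (risk-neutral case; the scenario probabilities π_ω and the per-scenario copies ȳ(i,ω), λ̄(i,ω), μ̄(i,ω) are notation introduced here only for illustration), the expectation in (26) becomes a probability-weighted sum and the lower-level condition is imposed scenario by scenario:

    E[Fi(·)] ≈ Σ_ω π_ω Fi(xi, x−i^e, ȳ(i,ω), λ̄(i,ω), μ̄(i,ω), ξ(ω)),
    (ȳ(i,ω), λ̄(i,ω), μ̄(i,ω)) ∈ S(xi, x−i^e, ξ(ω))   ∀ω,

so one copy of the lower-level variables and optimality conditions is created for each sampled scenario.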

5.6 Bilevel games are special cases of generalized Nash equilibrium problems

Bilevel games are special cases of GNEPs. In particular, multiple-leader games are GNEPs because there are constraints in each leader's problem that involve variables of the other leaders. These constraints can be upper-level constraints or equilibrium constraints. Regarding the upper-level constraints, the generalized nature of the problem is easy to see because the constraints of each leader depend explicitly on the competitors' strategies. Regarding the equilibrium constraints, their dependence on the competitors' decisions is harder to see because it is implicit. This implicit dependence in the lower-level problem is frequent when a common resource is traded or shared at the lower level (e.g., electricity). Leaders can express how much of this resource they wish to obtain by choosing their strategies, which, at first, are not restricted. However, the resource is allocated among the leaders at the lower level, where the leaders' requests are linked. This allocation represents an implicit coupling constraint among the leaders. We illustrate this fact in the next example, where the interdependence among the leaders' decisions only occurs in the equilibrium constraints.

Example 6 Given a multiple-leader-single-follower game with two leaders and one follower, leader 1 chooses among the strategies x1 ∈ X1 ⊆ R, leader 2 chooses among the strategies x2 ∈ X2 ⊆ R and the follower chooses among the strategies y ∈ Y ⊆ R. The leaders' decisions are not dependent on each other's decisions. Let the objective functions be F1(x1, x2, y) : X1 × X2 × Y → R for leader 1, F2(x1, x2, y) : X1 × X2 × Y → R for leader 2, and f(x1, x2, y) : X1 × X2 × Y → R for the follower. The multiple-leader-single-follower game is composed of the optimization problems of the two leaders:

(x1^e, x2^e, y^e) solves (30)–(31):

    minimize over x1, ȳ(1):   F1(x1, x2^e, ȳ(1))
    subject to:  x1 ∈ X1                                                                (30)
                 ȳ(1) solves { minimize over y:  f(x1, x2^e, y),  subject to: y ∈ Y }

    minimize over x2, ȳ(2):   F2(x1^e, x2, ȳ(2))
    subject to:  x2 ∈ X2                                                                (31)
                 ȳ(2) solves { minimize over y:  f(x1^e, x2, y),  subject to: y ∈ Y }

The feasible region for leader 1 is defined as Ω1(x1, x2^e, y) = {(x1, y) : x1 ∈ X1, y ∈ S(x1, x2^e)}, and for leader 2 as Ω2(x1^e, x2, y) = {(x2, y) : x2 ∈ X2, y ∈ S(x1^e, x2)}. Then, the multiple-leader-single-follower game is written in short form in equation (32).
(x1^e, x2^e, y^e) solves

    { minimize over x1, ȳ(1):  F1(x1, x2^e, ȳ(1)),   s.t. (x1, ȳ(1)) ∈ Ω1(x1, x2^e, ȳ(1))
      minimize over x2, ȳ(2):  F2(x1^e, x2, ȳ(2)),   s.t. (x2, ȳ(2)) ∈ Ω2(x1^e, x2, ȳ(2)) }      (32)

Figure 12 illustrates the set of available strategies for the leaders and the follower, as well as the feasible regions for both optimization problems in (32). The optimal solution of the lower-level problem has been assumed to be unique. Then, for any vector of leaders' decisions, (x1, x2), the optimal response of the follower is unique. The set Ω(x1, x2, y) provides the feasible region for the leaders and the follower for any vector (x1, x2). The set Ω1(x1, x2^e, y) represents the feasible region for leader 1 and the follower, assuming leader 2 is fixed at the equilibrium.
Although the leaders' strategies are not explicitly restricted beyond their own strategy sets, e.g., x1 ∈ X1, the lower-level problem restricts the strategies that the leaders can actually choose (dark area in the x1–x2 plane of Fig. 12). In this particular case, when x1 and x2 are simultaneously close to zero, the problem becomes infeasible. Therefore, there is no solution for the EPEC in that case. For example, this constraint could represent the use of a resource that should be supplied at a minimum level, such as electricity consumption.


[Fig. 12 Strategies set for players x1, x2 and y. The figure (omitted) depicts the sets Y, X1, X2, Ω(x1, x2, y) and Ω1(x1, x2^e, y).]

5.7 Other bilevel game compositions

The basic element of bilevel games is that leaders make decisions prior to the followers' decisions, with competition possible at both levels. We have pointed out that when several players compete at the same level, they do so in a non-cooperative Nash equilibrium setting. This holds in many real situations where imperfect competition arises. Nevertheless, different kinds of competitive behavior could be assumed at each level, as in the case of perfect competition. When markets are not concentrated or regulators do not restrict the players' behaviors, perfect competition may be expected.
To mention an example, the problem of generation expansion in power systems can be interpreted in this way: first, the leaders (generation firms) decide their optimal generation expansions in a Nash setting, anticipating the outcomes of the spot market. Then, the spot market clearing process takes place at the lower level and the participants (generation firms and system operator) act in a perfectly competitive way. See the work by Wogrin et al. (2011b), which uses conjectural variations (CV), ranging from perfect competition to Cournot competition, to analyze the latter problem within open- and closed-loop equilibrium frameworks. They show that there are correspondences between both frameworks, extending the work by Kreps and Scheinkman (1983). In particular, closed- and open-loop equilibrium models are equivalent for any level of competition when there is a single load period, while this equivalence does not completely hold when there are multiple load periods.

6 Solving bilevel games

Bilevel games are highly non-linear and non-convex; thus, existence and uniqueness of equilibrium points are very difficult to prove. Even in the simplest case, the single-leader-single-follower game, which is modeled as a bilevel optimization problem, the problem is generally NP-hard, i.e., no polynomial-time solution scheme is known for it (Dempe 2002a).


6.1 Solving MPECs

State-of-the-art algorithms to solve MPEC problems have evolved over the last decades, providing new theoretical and practical background, mainly in operations-research-related journals. Despite these advances, a foundation for the systematic solution of these problems has not yet been attained, mainly due to their structure. MPEC problems are very difficult to solve because the solution set of the lower-level problem, usually formulated as complementarity constraints, is non-convex. Additionally, it is necessary to verify certain constraint qualifications to prove convergence, and these rarely hold; for example, the LICQ (Linear Independence Constraint Qualification) and the MFCQ (Mangasarian-Fromovitz Constraint Qualification) fail at every feasible point in the presence of complementarity constraints. Hence, the global optimal solution is seldom obtained with NLP solvers.
Standard NLP solvers are not numerically safe for solving MPECs, although they are used for solving single-level MPEC reformulations (Luo et al. 1996). Because of the limitations of NLP solvers, new constraint qualification definitions have been proposed to characterize the stationary solutions (not necessarily global solutions) reached when MPEC problems are solved by conventional NLP algorithms. Examples are the W- (weak), C- (Clarke), B- (Bouligand), M- (Mordukhovich) and S- (strong) stationarity concepts. Schwartz (2011) defines such concepts for solving MPCCs. See the monograph on MPECs (Luo et al. 1996, Ch. 6) and the monograph on complementarity modeling (Gabriel et al. 2012, Ch. 8.3) for further details on NLP algorithms for solving MPEC problems.
In order to deal with non-regular complementarity constraints, regularization and penalization approaches (Ralph and Wright 2004) have been proposed. Both methods solve a sequence of NLPs until convergence, in which the complementarity constraints are either relaxed (regularization) or moved to the objective function by appending a penalty function of their violation (penalization).
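As a sketch, for a complementarity condition written, in the sign convention used later in Example 7, as μk ≥ 0, gk(·) ≤ 0, μk gk(·) = 0, the two schemes solve a sequence of NLPs of the form

    (regularization)   μk ≥ 0,  gk(·) ≤ 0,  −μk gk(·) ≤ ε  ∀k,        with ε ↓ 0,
    (penalization)     minimize  f(·) + ρ Σk (−μk gk(·)),  s.t. μk ≥ 0, gk(·) ≤ 0  ∀k,   with ρ ↑ ∞.

Since −μk gk(·) ≥ 0 whenever μk ≥ 0 and gk(·) ≤ 0, both the relaxed bound and the penalty term measure the violation of complementarity.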
Another approach for solving MPECs is the exact mixed-integer linear programming (MILP) reformulation of the complementarity constraints, at the cost of adding new binary variables and increasing the problem's complexity. Notwithstanding, commercial MILP solvers have shown consistent and significant performance improvements in the last decades, allowing the resolution of medium-size problems. MILP reformulations are also attractive because they provide a global solution and convergence is guaranteed. One of the pioneering works was done by Fortuny-Amat and McCarl (1981), who proposed a big-M-based reformulation of complementarity constraints by including a binary variable for each constraint. In this approach, a big-M term is introduced, which may be difficult to determine in certain applications, causing computational problems (Hu et al. 2008; Gabriel and Leuthold 2010). This approach has been applied to the strategic bidding problem (Ruiz and Conejo 2009; Bakirtzis et al. 2007; Gourtani et al. 2014) and to the generation capacity investment problem by Kazempour et al. (2011) and by Wogrin et al. (2011a), the latter providing a comparison with the NLP approach. Another possibility is to replace the complementarity conditions by using SOS1-type variables, as proposed by Siddiqui and Gabriel (2013).
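A sketch of the big-M reformulation for one complementarity pair, again in the convention μk ≥ 0, gk(·) ≤ 0, μk gk(·) = 0, where uk is the added binary variable and M^μ, M^g are sufficiently large constants (their values are problem dependent and, as noted above, may be hard to choose):

    0 ≤ μk ≤ M^μ uk,      0 ≤ −gk(·) ≤ M^g (1 − uk),      uk ∈ {0, 1}.

If uk = 0, the multiplier is forced to zero and the constraint may be slack; if uk = 1, the constraint is forced to be active and the multiplier may be positive. In both cases μk gk(·) = 0 holds. The SOS1 alternative instead declares the pair (μk, −gk(·)) as a special ordered set of type 1, so that at most one of its two entries can be nonzero, avoiding any big-M constant.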
MILP reformulations can be viewed as enumerative methods encoded through artificial binary variables. Around the time of the early work of Fortuny-Amat and McCarl (1981), Bard and Falk (1982) proposed a branch-and-bound (B&B) algorithm to solve linear programs with equilibrium constraints. The main idea of the algorithm is to recursively enforce the complementarities through branching. Ad-hoc B&B algorithms therefore constitute an enumerative alternative for solving MPEC problems. This approach has been improved for the linear/quadratic bilevel problem (Bard and Moore 1990) and for the quadratic case (Al-Khayyal et al. 1992).
Recently, there have been attempts at developing convex relaxations of the complementarity constraints. A semidefinite programming (SDP) relaxation for linear programs with equilibrium constraints is proposed by Fampa et al. (2013). The approach is combined with the ad-hoc B&B algorithm proposed by Bard and Moore (1990), where SDP relaxations are used to generate bounds, helping to solve MPEC problems more efficiently. Also, linear programs with equilibrium constraints can be recast as quadratically-constrained quadratic programs (QCQP), where the quadratic equality constraints come from the complementarity constraints. Then, an SDP relaxation of the QCQP can be used for solving this problem. There is not much related literature applied to power systems; however, there are applications to the strategic bidding problem (Ghamkhari et al. 2016; Fampa and Pimentel 2016) showing promising solution times compared with MILP reformulations.
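A sketch of the QCQP recast: since each product μk gk(·) is non-positive under the sign constraints, the componentwise complementarity can be aggregated into a single bilinear (quadratic) equality,

    μ ≥ 0,  g(·) ≤ 0,  μk gk(·) = 0  ∀k      ⟺      μ ≥ 0,  g(·) ≤ 0,  Σk μk gk(·) = 0,

and it is this quadratic equality that turns a linear program with equilibrium constraints into a QCQP, which can then be relaxed by standard SDP lifting.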

6.2 Solving EPECs

EPECs are basically a set of coupled MPECs and they inherit the “bad” properties of MPECs. They are non-convex and non-linear, and finding a solution constitutes a challenge; thus, a global solution is seldom reached. Because the constraint qualifications do not hold for the MPECs that compose the EPEC, the solutions obtained are usually only stationary points. These stationary solutions may be Nash equilibria, local equilibria or saddle points. Nevertheless, several authors have proved existence and uniqueness of equilibrium under specific assumptions. One early study on bilevel games with multiple leaders and followers, by Sherali (1984), shows that for certain simple cases a single equilibrium exists. Existence and uniqueness are also proved in the context of electricity power markets in Hu and Ralph (2007), for a deterministic case, and in DeMiguel and Xu (2009), for a stochastic case. However, Ehrenmann (2004a) shows that, for an asymmetric extension of the game in Sherali (1984), a manifold of equilibria may exist. Also, Ehrenmann (2004b) illustrates that, even in the simplest possible EPEC setting, local uniqueness of the solution is not guaranteed. Nevertheless, existence (but not uniqueness) of a global Nash equilibrium has been proven for a relaxation of the lower level when shared constraints and particular leaders' objective structures, such as potential functions (Kulkarni and Shanbhag 2014) and quasi-potential functions (Kulkarni and Shanbhag 2015), are considered. However, Pang and Fukushima (2005) presented numerous counterexamples showing that there is no general existence result for bilevel games. It appears, though, that adding leaders' conjectures about the follower equilibrium, and requiring consistency among them, helps to “regularize” the problem of existence and equilibrium computation (Kulkarni and Shanbhag 2013). In summary, to date the understanding of when a solution to an EPEC exists remains limited.
Finding algorithms to solve this problem constitutes an ongoing line of research. In general, three families of algorithms have been suggested in the literature for solving EPECs:

– Diagonalization approach, by solving the MPEC of each player sequentially, holding the other players' decisions fixed, until convergence. This approach can be further classified into two methods, the Jacobi and the Gauss–Seidel methods (see the sketch after this list). See Hobbs et al. (2000), Cardell et al. (1997) and (Gabriel et al. 2012, Ch. 7.2.1).
– Simultaneous solution method, by writing the strong stationarity necessary conditions of all MPECs and solving all the constraints simultaneously. The solution of this problem is known as a strong stationary solution. See Leyffer and Munson (2010), Su (2005) and (Gabriel et al. 2012, Ch. 9.4.2).


– System of inequalities with equilibrium constraints, generally applied when the EPEC has a finite number of strategies. The inequality constraints are built by imposing, for every player, that the payoff at the leaders' equilibrium strategy is at least as good as the payoff of any other discrete strategy. The equilibrium constraints from the lower-level problem, i.e., its first-order optimality conditions, are appended to this set of constraints. See the examples in Pozo et al. (2013a, b).
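The following minimal sketch illustrates the Gauss–Seidel diagonalization loop on a stylized two-leader example in which the lower level (a linear inverse-demand price) has been substituted analytically, so that each leader's problem reduces to a closed-form best response; all numbers and function names are illustrative assumptions, not taken from the cited works.

    # Gauss-Seidel diagonalization sketch: each leader's problem is solved in turn,
    # holding the other leaders' decisions fixed, until the decisions stop changing.
    # The lower level (price formation p = a - b*(q1 + q2)) is substituted analytically,
    # so each leader's best response is available in closed form.
    a, b = 100.0, 1.0            # illustrative inverse-demand parameters
    costs = [10.0, 20.0]         # illustrative marginal costs of the two leaders

    def best_response(i, q):
        """Leader i's optimal quantity given the rivals' current quantities."""
        q_others = sum(q) - q[i]
        return max(0.0, (a - costs[i] - b * q_others) / (2.0 * b))

    q = [0.0, 0.0]               # starting point
    for iteration in range(100):
        q_old = q[:]
        for i in range(len(q)):  # sequential (Gauss-Seidel) sweep over the leaders
            q[i] = best_response(i, q)
        if max(abs(q[i] - q_old[i]) for i in range(len(q))) < 1e-8:
            break

    price = a - b * sum(q)
    print(f"iterations: {iteration + 1}, quantities: {q}, price: {price:.2f}")

A Jacobi variant would compute every best response from q_old (the previous sweep) instead of the most recently updated quantities; in either case, convergence of the loop is not guaranteed in general and, when it converges, the point found is only a candidate equilibrium.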
Because these approaches do not guarantee a global solution, some hybrid methods attempt to find the “best” solution among the different solutions obtained when the problem is solved from different starting points.

6.3 One-level reformulation of bilevel games

Within this context, a one-level reformulation is often used to solve bilevel games. First, we attempt to replace the lower-level problem with its equivalent first-order optimality conditions. When the lower-level problem is an LP, the KKT conditions are necessary and sufficient optimality conditions. An alternative is to replace the lower-level problem by the set of primal constraints, dual constraints and the strong duality equality, which together also constitute a set of necessary and sufficient first-order optimality conditions.
After that, the equivalent equilibrium conditions are added as constraints to each leader's optimization problem. In this manner, the upper-level problem becomes a set of one-level problems (MPECs) stated as an EPEC. In the stochastic version, the lower-level problem is solved for all scenario realizations, ω. One set of equivalent optimality conditions per scenario is added to each leader's problem, resulting in a set of stochastic MPECs, or a stochastic EPEC.
Example 7 presents a multiple-leader-single-follower example and its reformulation into
a system of inequalities with equilibrium constraints.

Example 7 Based on the previous multiple-leader-single-follower game (Example 6) with two leaders and one follower, leader 1 chooses among the strategies x1 ∈ X1 = [x1^min, x1^max] ⊆ R, leader 2 chooses among the strategies x2 ∈ X2 = [x2^min, x2^max] ⊆ R and the follower chooses among the strategies y ∈ Y(x1, x2). Let Y(x1, x2) = {y : A1 x1 + A2 x2 + B(x1, x2)y ≤ b; C1 x1 + C2 x2 + D(x1, x2)y = e}. The leaders' decisions are not dependent on each other's decisions, but they do depend on the optimal dual values of the follower's problem. Let the objective functions be F1(x1, λ) = (K1 − λ)x1 for leader 1, F2(x2, λ) = (K2 − λ)x2 for leader 2, and f(x1, x2, y) = c1 x1 + c2 x2 + d(x1, x2)y for the follower.
The follower linear lower-level problem is defined as:

    minimize over y:   c1 x1 + c2 x2 + d(x1, x2)y                                       (33a)
    subject to:  A1 x1 + A2 x2 + B(x1, x2)y ≤ b    (μ)                                  (33b)
                 C1 x1 + C2 x2 + D(x1, x2)y = e    (λ)                                  (33c)

where μ and λ are the Lagrange multipliers (dual variables) associated with the inequality and equality constraints, respectively. Then, we define the Lagrange function L(y, μ, λ). We have omitted the dependence on the leaders' decisions, x, in the Lagrange function and its variables because those decisions are known parameters for the lower-level problem.

L(y, μ, λ) = c1 x1 + c2 x2 + d(x1 , x2 )y
−μ(A1 x1 + A2 x2 + B(x1 , x2 )y − b)
−λ(C1 x1 + C2 x2 + D(x1 , x2 )y − e) (34)


The lower-level problem reformulation is given by the KKT optimality conditions:


∇ y L(y, μ, λ) = d(x1 , x2 ) − B(x1 , x2 )μ − D(x1 , x2 )λ = 0 (35a)
∇μ L(y, μ, λ) = A1 x1 + A2 x2 + B(x1 , x2 )y − b ≤ 0 (35b)
∇λ L(y, μ, λ) = C1 x1 + C2 x2 + D(x1 , x2 )y − e = 0 (35c)
μ(A1 x1 + A2 x2 + B(x1 , x2 )y − b) = 0 (35d)
μ ≥ 0, λ : free (35e)
Note that the optimal solutions, (ȳ, μ̄, λ̄), of the original lower-level problem (33) are also solutions of the set of constraints (35). Therefore, the lower-level optimal solutions are represented by the parametrized set of constraints (36). Then, (ȳ, μ̄, λ̄) ∈ S(x1, x2) if and only if:


(ȳ, μ̄, λ̄) solves

    d(x1, x2) − B(x1, x2)μ − D(x1, x2)λ = 0
    A1 x1 + A2 x2 + B(x1, x2)y − b ≤ 0
    C1 x1 + C2 x2 + D(x1, x2)y − e = 0                                                  (36)
    μ(A1 x1 + A2 x2 + B(x1, x2)y − b) = 0
    μ ≥ 0,  λ free
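As mentioned in Sect. 6.3, for this linear lower level the complementarity condition (35d) can equivalently be replaced by the strong duality equality. A sketch in the notation of (33)–(35), to be used together with the remaining conditions (35a)–(35c) and (35e):

    d(x1, x2)y = μ(b − A1 x1 − A2 x2) + λ(e − C1 x1 − C2 x2).

This equality states that the follower's primal and dual objective values coincide; combined with primal feasibility, (35b)–(35c), and dual feasibility, (35a) and (35e), it is equivalent to the complementarity condition (35d).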
Given the leader-2 decision’s, x2 , the optimization problem for leader 1 is stated as an
MPEC problem given by (37). Actually, it is a parametrized MPEC problem on x2 .
minimize (K 1 − λ̄)x1 (37a)
x1 , ȳ,μ̄,λ̄

subject to: x1min ≤ x1 ≤ x1max (37b)


 
( ȳ, μ̄, λ̄) ∈ S x1 , x2 (37c)
Similarly to the first leader’s MPEC problem, we can reformulate an MPEC problem for
leader 2. The simultaneous solution of both problems is stated as an equilibrium and the
problem as an EPEC. Equations (38) represents the EPEC as a system of inequalities with
equilibrium constrains.
(x1^e, x2^e, y^e, λ^e, μ^e) solves

    (K1 − λ^e) x1^e ≤ (K1 − λ̄(1)) x1,   ∀x1 ∈ X1
    (K2 − λ^e) x2^e ≤ (K2 − λ̄(2)) x2,   ∀x2 ∈ X2
    x1^min ≤ x1^e ≤ x1^max
    x2^min ≤ x2^e ≤ x2^max                                                              (38)
    (ȳ^e, λ̄^e, μ̄^e) ∈ S(x1^e, x2^e)
    λ̄(1) ∈ S(x1, x2^e),   ∀x1 ∈ X1
    λ̄(2) ∈ S(x1^e, x2),   ∀x2 ∈ X2
The first two equations in (38) represent the Nash equilibrium inequalities for any possible strategy of each leader. The last three equations represent the solutions of the lower-level problem. Note that this is a semi-infinite set of constraints, with a finite number of variables and an infinite number of constraints. If the leaders' strategies are discrete, and therefore finite, then the set of constraints becomes finite and the problem solvable, as shown by Pozo et al. (2013a).
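A minimal sketch of this discretization idea for leader 1's MPEC (37), with leader 2 fixed: for each candidate x1 the lower-level problem is solved and the dual of its balance (equality) constraint plays the role of λ in the payoff (K1 − λ)x1. Here the lower level is a stylized two-unit merit-order dispatch solved in closed form (in general it would be the LP (33), with λ read from the solver's duals); all numbers are illustrative assumptions.

    # Enumeration sketch for leader 1's MPEC (37) with a discretized strategy set:
    # for every candidate x1 (x2 held fixed), the lower-level dispatch is solved and the
    # marginal cost of serving the balance constraint provides lambda in (K1 - lambda)*x1.
    # The constant terms c1*x1 + c2*x2 of the follower's objective are omitted because
    # they do not change the follower's solution.
    unit_cost = [5.0, 8.0]                   # follower's generation costs (merit order)
    unit_cap = [20.0, 40.0]                  # follower's capacities
    demand, x2_fixed, K1 = 52.0, 10.0, 6.0   # assumed data; x2 is held fixed

    def lower_level(x1):
        """Dispatch the residual demand; return the dispatch and lambda (marginal cost)."""
        residual = demand - x1 - x2_fixed
        y1 = min(residual, unit_cap[0])
        y2 = residual - y1
        lam = unit_cost[0] if residual < unit_cap[0] else unit_cost[1]
        return (y1, y2), lam

    candidates = [5.0 * k for k in range(7)]         # strategies x1 in {0, 5, ..., 30}
    payoffs = []
    for x1 in candidates:
        _, lam = lower_level(x1)
        payoffs.append(((K1 - lam) * x1, x1, lam))   # objective (37a), to be minimized

    best_payoff, best_x1, best_lam = min(payoffs)
    print(f"best x1 = {best_x1}, payoff = {best_payoff}, lambda = {best_lam}")

Minimizing (K1 − λ)x1 is equivalent to maximizing the leader's margin (λ − K1)x1; in this toy instance the best discrete strategy is the largest x1 that keeps the expensive unit marginal.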

6.4 Manifold of lower-level solutions

Consider the stochastic single-leader-single-follower game. In order to have a unique solution, strict convexity of the lower-level problem for each decision of the leader, x, and each realization, ω, is required. However, when the lower level is linear (convex and concave at the same time), the KKT conditions are applicable, but a non-unique (globally) optimal solution may be obtained for at least one value of x. This means that, for a given decision of the leader, x, the optimal decision of the follower is a set of decisions y(x, ω) with the same objective function value. The follower is then indifferent among any of these decisions. In other words, the first-order optimality conditions of the linear lower level are sufficient, but there could be multiple solutions.
A bilevel problem solution is called an optimistic solution if the leader assumes that the
followers will choose the solution with the best possible outcome for the leader, ȳ. On the
contrary, a solution is called a pessimistic solution if the leader takes a pessimistic attitude
towards the outcome of the follower. If the problem has multiple followers, the solution of the
lower level could have multiple equilibria (solutions), and optimistic or pessimistic solutions
could be assumed by all the leaders. In general, most bilevel games are implicitly formulated
as optimistic. For the single-leader-single-follower game, the optimistic problem is defined as
in (17), while the pessimistic problem is defined as in (39). One can extend in a similar way a
pessimistic representation for multiple leaders and/or multiple followers. Pessimistic bilevel
models are challenging to handle even for the single-leader-single-follower case (Dempe
2002b; Dempe et al. 2014).
(x^e, y^e) solves

    minimize over x  [ maximize over ȳ:  F(x, ȳ) ]
    subject to:  x ∈ X                                                                  (39)
                 ȳ ∈ S(x)
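A tiny numerical illustration (all data made up) of why a manifold of lower-level optima matters: the follower below is indifferent among every y in [0, 1], so the optimistic value, computed as in (17), and the pessimistic value, computed as in (39), differ.

    # Toy illustration of optimistic vs. pessimistic bilevel values when the lower level
    # has a manifold of optima: the follower's cost is constant in y, so every y in [0, 1]
    # is optimal and the leader's value depends on which follower response is assumed.
    import numpy as np

    x_grid = np.linspace(0.0, 1.0, 101)      # leader's strategies X
    y_grid = np.linspace(0.0, 1.0, 101)      # follower's strategies Y

    def leader_obj(x, y):
        return (x - 1.0) ** 2 + y            # F(x, y): leader prefers x = 1 and small y

    def follower_obj(x, y):
        return 0.0                           # f(x, y): the follower is indifferent, so S(x) = Y

    def follower_optimal_set(x, tol=1e-9):
        vals = np.array([follower_obj(x, y) for y in y_grid])
        return y_grid[vals <= vals.min() + tol]     # S(x): all optimal follower responses

    optimistic = min(min(leader_obj(x, y) for y in follower_optimal_set(x)) for x in x_grid)
    pessimistic = min(max(leader_obj(x, y) for y in follower_optimal_set(x)) for x in x_grid)
    print(f"optimistic value = {optimistic:.2f}, pessimistic value = {pessimistic:.2f}")

Here the optimistic leader attains 0 (assuming the follower chooses y = 0), whereas the pessimistic leader can only guarantee 1; with a strictly convex lower level the two values would coincide.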
A manifold of lower-level optimal responses constitutes a problem for the leader, since the corresponding optimal leader decisions can differ significantly. Identifying a meaningful and desirable equilibrium is challenging and remains an open problem in power systems. There are few papers that present ideas on how to find all equilibria and how to select one of them. For example, Hasan et al. (2008), Barroso et al. (2006), Pozo and Contreras (2011) and Ruiz et al. (2012) showed that there are multiple solutions for the strategic bidding problem and proposed methodologies to find all pure Nash equilibria. A method for selecting the “best” equilibrium for this problem was proposed in Hasan and Galiana (2008). Similarly, the generation expansion problem potentially admits multiple equilibria (Pozo et al. 2013a, 2014, 2017; Zerrahn and Huppmann 2014; Kazempour et al. 2013), which can impact, in practice, the network planner's decisions on anticipative transmission capacity planning. In fact, Pozo et al. (2017) show that a proactive investment plan can lead to a total cost higher than not investing at all because of the existence of multiple market-driven generation expansion equilibria. Yet, as mentioned above, the solution algorithms are arduous and selecting a meaningful solution remains challenging (Gabriel et al. 2012; Dempe et al. 2014).

7 Future challenges in bilevel games

Bilevel games set new challenges for power system agents and constitute an ongoing topic for many researchers. They represent a challenge because of the major complications that arise when solving these models and, more specifically, EPEC models. Some of the main challenges are listed below:
– Computation of the global equilibrium: EPEC models are non-convex and non-linear and they inherit the “bad” properties of the MPECs that constitute an EPEC. If it is difficult to find a global solution for a single MPEC, it is much more difficult to jointly solve MPECs parameterized by the solutions of the other MPECs. Consequently, the global solution of EPECs is seldom reached. The obtained solution may be a Nash equilibrium, a local equilibrium or a saddle point.
– Mixed versus pure equilibria: bilevel games may not have solutions in pure strategies, but may have them in mixed strategies. However, computing mixed strategies constitutes a big challenge for games with more than two players. On the other hand, mixed strategies may have no straightforward interpretation in many contexts.
– Multiple equilibria: MPECs and EPECs may have multiple equilibria but, in general, most algorithms are only able to find one of them. For example, supply function equilibria in the strategic bidding problem have multiple solutions because there are multiple supply functions that reach the same outcome. Algorithms to compute all equilibria are not common and, in most cases, they need to express the game in normal form and solve a polynomial system of equations (Datta 2010; Yang et al. 2012), which may not be possible for continuous decisions.
– Tractability: in general, EPECs lack tractability for large problems. It is desirable to search for new and specific decomposition techniques for EPEC problems.
– Economic consistency: most of the works on bilevel models rely on underlying assumptions, such as perfect information (the players know the profit functions and the strategy sets of their competitors) or rationality (a player always acts in a rational way). However, these assumptions can be disputed. Furthermore, because of the mathematical properties of EPEC models, a solution approach should pay attention to the Lagrange multipliers used in a market environment. In addition, when there is uncertainty, players want to manage their risks and the concept of equilibria under risk (Kannan et al. 2013; Ralph and Smeers 2011) arises, complicating the economic interpretation, the solution approach and the tractability of the problem. Economic consistency is related to the conjectures about markets and decision makers, and to the simplifications of their relationships, which should be carefully considered when modeling the interaction among all participants.

8 Summary and conclusions

In this paper, we provide an overview of the main statements and insights concerning the use of equilibrium models in power systems under the restructured environment. Equilibrium problems appear to be natural for modeling competition. In general, the energy sector is driven by market fundamentals. Hence, it is a rich source of game problems that entail solving the simultaneous optimization problems of multiple interacting players. The emphasis of this paper is on the structure of non-cooperative equilibrium models, insights about solution existence and uniqueness, and algorithms for solving non-cooperative bilevel games. One-level games, such as the Nash equilibrium or the GNEP, are presented as a bridge to introduce bilevel games and to understand the complexities of hierarchical games.
Bilevel games and their application to modeling operational and planning problems in restructured power systems have received growing attention in recent years. Such games are well suited to model hierarchical behavior, but they are hard to solve in general. Bilevel games are generally modeled as MPECs, EPECs or bilevel optimization models within the operations research field. In general, bilevel problems are highly non-linear and non-convex, and the existence of global and unique solutions is not guaranteed. Although some interesting solution algorithms have been proposed for solving MPECs and EPECs, a generalized theory and solution algorithms have not been firmly established so far, especially for EPECs. Only a few specific instances of EPECs have been shown to have equilibria. In many of these instances, the solution is stated as a stationary equilibrium, which is not necessarily a global solution; such stationary solutions may be global, local, or saddle points. Additionally, uniqueness is not guaranteed for EPECs in general. In fact, a manifold of equilibria is a feature of many EPECs, but most algorithms for solving EPECs only provide a single solution. The difficulties, both from a theoretical and a numerical point of view, arise because EPEC problems inherit the bad properties of the set of MPEC problems that compose the corresponding EPEC.
Bilevel games set new challenges for power system participants and constitute an ongoing topic for many researchers. One major challenge of bilevel formulations is handling the equilibrium constraints, since they are optimization problems nested within a set of optimization problems.

References
Al-Khayyal, F. A., Horst, R., & Pardalos, P. M. (1992). Global optimization of concave functions subject to
quadratic constraints: An application in nonlinear bilevel programming. Annals of Operations Research,
34(1), 125–147.
Arrow, K. J., & Debreu, G. (1954). Existence of an equilibrium for a competitive economy. Econometrica:
Journal of the Econometric Society, 22(3), 265–290.
Arroyo, J. (2010). Bilevel programming applied to power system vulnerability analysis under multiple con-
tingencies. IET Generation, Transmission and Distribution, 4(2), 178–190.
Bakirtzis, A., Ziogos, N., Tellidou, A., & Bakirtzis, G. (2007). Electricity producer offering strategies in day-
ahead energy market with step-wise offers. IEEE Transactions on Power Systems, 22(4), 1804–1818.
Baldick, R., Grant, R., & Kahn, E. (2004). Theory and application of linear supply function equilibrium in
electricity markets. Journal of Regulatory Economics, 25(2), 143–167.
Baldick, R., & Hogan, W. (2001). Capacity constrained supply function equilibrium models of electricity mar-
kets: Stability, non-decreasing constraints, and function space iterations. Technical Report, University
of California Energy Institute.
Bard, J. F., & Falk, J. E. (1982). An explicit solution to the multi-level programming problem. Computers and
Operations Research, 9(1), 77–100.
Bard, J. F., & Moore, J. T. (1990). A branch and bound algorithm for the bilevel programming problem. SIAM
Journal on Scientific and Statistical Computing, 11(2), 281–292.
Barroso, L., Carneiro, R., Granville, S., Pereira, M., & Fampa, M. (2006). Nash equilibrium in strategic
bidding: A binary expansion approach. IEEE Transactions on Power Systems, 21(2), 629–638.
Birge, J., & Louveaux, F. (2011). Introduction to Stochastic Programming. Springer Series in Operations
Research and Financial Engineering. Berlin: Springer.
Cardell, J., Hitt, C., & Hogan, W. (1997). Market power and strategic interaction in electricity networks.
Resource and Energy Economics, 19(1), 109–137.
Colson, B., Marcotte, P., & Savard, G. (2007). An overview of bilevel optimization. Annals of Operations
Research, 153(1), 235–256.
Conejo, A., Carrión, M., & Morales, J. (2010). Decision making under uncertainty in electricity markets.
Berlin: Springer.
Contreras, J., Klusch, M., & Krawczyk, J. (2004). Numerical solutions to Nash-Cournot equilibria in coupled
constraint electricity markets. IEEE Transactions on Power Systems, 19(1), 195–206.
Datta, R. (2010). Finding all Nash equilibria of a finite game using polynomial algebra. Economic Theory,
42(1), 55–96.
David, A., & Wen, F. (2001). Market power in electricity supply. IEEE Transactions on Energy Conversion,
16(4), 352–360.
Day, C., Hobbs, B., & Pang, J. (2002). Oligopolistic competition in power networks: A conjectured supply
function approach. IEEE Transactions on Power Systems, 17(3), 597–607.
de la Torre, S., Contreras, J., & Conejo, A. (2004). Finding multiperiod Nash equilibria in pool-based electricity
markets. IEEE Transactions on Power Systems, 19(1), 643–651.
DeMiguel, V., & Xu, H. (2009). A stochastic multiple-leader Stackelberg model: Analysis, computation, and
application. Operations Research, 57(5), 1220–1235.


Dempe, S. (2002a). Foundations of bilevel programming. Berlin: Springer.


Dempe, S. (2002b). Foundations of bilevel programming. Dordrecht: Kluwer Academic Publishers.
Dempe, S. (2003). Annotated bibliography on bilevel programming and mathematical programs with equilib-
rium constraints. Optimization, 52(3), 333–359.
Dempe, S., Mordukhovich, B., & Zemkoho, A. B. (2014). Necessary optimality conditions in pessimistic
bilevel programming. Optimization: A Journal of Mathematical Programming and Operations Research,
63(4), 505–533.
De Wolf, D., & Smeers, Y. (1997). A stochastic version of a Stackelberg-Nash-Cournot equilibrium model.
Management Science, 43, 190–197.
Ehrenmann, A. (2004a). Manifolds of multi-leader cournot equilibria. Operations Research Letters, 32(2),
121–125.
Ehrenmann, A. (2004b). Equilibrium problems with equilibrium constraints and their application to electricity
markets. Ph.D. thesis, Citeseer
Facchinei, F., Fischer, A., & Piccialli, V. (2007). On generalized Nash games and variational inequalities.
Operations Research Letters, 35(2), 159–164.
Facchinei, F., & Pang, J. S. (2003). Finite-dimensional variational inequalities and complementarity problems.
Springer series in operations research and financial engineering. New York: Springer.
Fampa, M., Melo, W. A., & Maculan, N. (2013). Semidefinite relaxation for linear programs with equilibrium
constraints. International Transactions in Operational Research, 20(2), 201–212.
Fampa, M., & Pimentel, W. (2016). Linear programing relaxations for a strategic pricing problem in electricity
markets. International Transactions in Operational Research, 24, 159–172.
Fortuny-Amat, J., & McCarl, B. (1981). A representation and economic interpretation of a two-level program-
ming problem. The Journal of the Operational Research Society, 32(9), 783–792.
Fudenberg, D., & Tirole, J. (1991). Game theory. Cambridge: MIT Press.
Gabriel, S. A., Conejo, A. J., Fuller, J. D., Hobbs, B. F., & Ruiz, C. (2012). Complementarity modeling in
energy markets. Berlin: Springer.
Gabriel, S. A., & Leuthold, F. U. (2010). Solving discretely-constrained MPEC problems with applications in
electric power markets. Energy Economics, 32(1), 3–14.
Garcés, L., Conejo, A., García-Bertrand, R., & Romero, R. (2009). A bilevel approach to transmission expan-
sion planning within a market environment. IEEE Transactions on Power Systems, 24(3), 1513–1522.
García-Alcalde, A., Ventosa, M., Rivier, M., Ramos, A., & Relaño, G.(2002). Fitting electricity market models:
A conjectural variations approach. In 14th power systems computation conference (PSCC 2002).
Ghamkhari, M., Sadeghi-Mobarakeh, A., & Mohsenian-Rad, H. (2016). Strategic bidding for producers
in nodal electricity markets: A convex relaxation approach. IEEE Transactions on Power Systems
(accepted).
Gourtani, A., Pozo, D., Vespucci, M. T., & Xu, H. (2014). Medium-term trading strategy of a dominant
electricity producer. Energy Systems, 5(2), 323–347.
Harker, P. (1991). Generalized Nash games and quasi-variational inequalities. European Journal of Operational
Research, 54(1), 81–94.
Hasan, E., & Galiana, F. (2008). Electricity markets cleared by merit order? Part II: Strategic offers and market
power. IEEE Transactions on Power Systems, 23(2), 372–379.
Hasan, E., Galiana, F., & Conejo, A. (2008). Electricity markets cleared by merit order? Part I: Finding the
market outcomes supported by pure strategy nash equilibria. IEEE Transactions on Power Systems, 23(2),
361–371.
Hayashi, S., Yamashita, N., & Fukushima, M. (2005). Robust Nash equilibria and second-order cone comple-
mentarity problems. Journal of Nonlinear and Convex Analysis, 6(2), 283–296.
Hobbs, B. (1986). Network models of spatial oligopoly with an application to deregulation of electricity
generation. Operations Research, 34(3), 395–409.
Hobbs, B. (2002). Linear complementarity models of Nash-Cournot competition in bilateral and POOLCO
power markets. IEEE Transactions on Power Systems, 16(2), 194–202.
Hobbs, B., Metzler, C., & Pang, J. (2000). Strategic gaming analysis for electric power systems: An MPEC
approach. IEEE Transactions on Power Systems, 15(2), 638–645.
Hu, J., Mitchell, J. E., Pang, J. S., Bennett, K. P., & Kunapuli, G. (2008). On the global solution of linear
programs with linear complementarity constraints. SIAM Journal on Optimization, 19(1), 445–471.
Hu, X., & Ralph, D. (2007). Using EPECs to model bilevel games in restructured electricity markets with
locational prices. Operations Research, 55(5), 809–827.
Jing-Yuan, W., & Smeers, Y. (1999). Spatial oligopolistic electricity models with Cournot generators and
regulated transmission prices. Operations Research, 47(1), 102–112.


Kannan, A., Shanbhag, U. V., & Kim, H. M. (2013). Addressing supply-side risk in uncertain power markets:
Stochastic Nash models, scalable algorithms and error analysis. Optimization Methods and Software,
28(5), 1095–1138.
Kazempour, J., Conejo, A., & Ruiz, C. (2011). Strategic generation investment using a complementarity
approach. IEEE Transactions on Power Systems, 26(2), 940–948.
Kazempour, J., Conejo, A., & Ruiz, C. (2013). Generation investment equilibria with strategic producers—Part
I: Formulation. IEEE Transactions on Power Systems, 28(3), 2613–2622.
Kirschen, D. S., & Strbac, G. (2004). Fundamentals of power system economics. New York: Wiley.
Kreps, D., & Scheinkman, J. (1983). Quantity precommitment and bertrand competition yield Cournot out-
comes. The Bell Journal of Economics, 14(2), 326–337.
Kulkarni, A. A. (2011). Generalized Nash games with shared constraints: Existence, efficiency, refinement
and equilibrium constraints. Ph.D. thesis, University of Illinois at Urbana-Champaign.
Kulkarni, A. A., & Shanbhag, U. V. (2012). On the variational equilibrium as a refinement of the generalized
Nash equilibrium. Automatica, 48(1), 45–55.
Kulkarni, A. A., & Shanbhag, U. V. (2013). On the consistency of leaders’ conjectures in hierarchical games.
In 2013 IEEE 52nd annual conference on decision and control (CDC) (pp. 1180–1185). IEEE.
Kulkarni, A. A., & Shanbhag, U. V. (2014). A shared-constraint approach to multi-leader multi-follower
games. Set Valued and Variational Analysis, 22(4), 691–720.
Kulkarni, A. A., & Shanbhag, U. V. (2015). An existence result for hierarchical Stackelberg v/s Stackelberg
games. IEEE Transactions on Automatic Control, 60(12), 3379–3384.
Lee, K., & Baldick, R. (2003). Solving three-player games by the matrix approach with application to an
electric power market. IEEE Transactions on Power Systems, 18(4), 1573–1580.
Leyffer, S., & Munson, T. (2010). Solving multi-leader-common-follower games. Optimization Methods and
Software, 25(4), 601–623.
Liu, Y., Ni, Y., & Wu, F. (2004). Existence, uniqueness, stability of linear supply function equilibrium in
electricity markets. In IEEE power engineering society general meeting, Denver (pp. 249–254).
Luo, Z., Pang, J., & Ralph, D. (1996). Mathematical Programs with equilibrium constraints. Cambridge:
Cambridge University Press.
Monderer, D., & Shapley, L. (1996). Potential games. Games and Economic Behavior, 14, 124–143.
Nash, J. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences,
36(1), 48–49.
Neuhoff, K., Barquin, J., Boots, M. G., Ehrenmann, A., Hobbs, B. F., Rijkers, F. A. M., et al. (2005).
Network-constrained Cournot models of liberalized electricity markets: The devil is in the details. Energy
Economics, 27(3), 495–525.
Pang, J., & Fukushima, M. (2005). Quasi-variational inequalities, generalized Nash equilibria, and multi-
leader-follower games. Computational Management Science, 2(1), 21–56.
Pozo, D., & Contreras, J. (2011). Finding multiple Nash equilibria in pool-based markets: A stochastic EPEC
approach. IEEE Transactions on Power Systems, 26(3), 1744–1752.
Pozo, D., Contreras, J., & Sauma, E. (2013a). If you build it, he will come: Anticipative power transmission
planning. Energy Economics, 36, 135–146.
Pozo, D., Sauma, E. E., & Contreras, J. (2013b). A three-level static MILP model for generation and trans-
mission expansion planning. IEEE Transactions on Power Systems, 28(1), 202–210.
Pozo, D., Sauma, E. E., & Contreras, J. (2014). Impacts of network expansion on generation capacity expansion.
In 18th Power systems computation conference (PSCC 2014). Wroclav, Poland.
Pozo, D., Sauma, E., & Contreras, J. (2017). When doing nothing may be the best investment action: Pessimistic
anticipative power transmission planning. Applied Energy (under review).
Ralph, D., & Smeers, Y. (2006). EPECs as models for electricity markets. In Power systems conference and
exposition (PSCE’06), Atlanta, GA, USA (pp. 74–80).
Ralph, D., & Smeers, Y. (2011). Pricing risk under risk measures: An introduction to stochastic-endogenous
equilibria. Available at SSRN 1903897.
Ralph, D., & Wright, S. (2004). Some properties of regularization and penalization schemes for MPECs.
Optimization Methods and Software, 19(5), 527–556.
Rockafellar, R., & Uryasev, S. (2000). Optimization of conditional value-at-risk. Journal of Risk, 2, 21–42.
Rosen, J. (1965). Existence and uniqueness of equilibrium points for concave n-person games. Econometrica:
Journal of the Econometric Society, 33(3), 520–534.
Ruiz, C., & Conejo, A. (2009). Pool strategy of a producer with endogenous formation of locational marginal
prices. IEEE Transactions on Power Systems, 24(4), 1855–1866.
Ruiz, C., Conejo, A., & Smeers, Y. (2012). Equilibria in an oligopolistic electricity pool with stepwise offer
curves. IEEE Transactions on Power Systems, 27(2), 752–761.


Sauma, E. E., & Oren, S. S. (2006). Proactive planning and valuation of transmission investments in restructured
electricity markets. Journal of Regulatory Economics, 30(3), 261–290.
Schwartz, A. (2011). Mathematical programs with complementarity constraints: Theory, methods and applications. Ph.D. thesis, Institute of Applied Mathematics and Statistics, University of Würzburg.
Sherali, H. D. (1984). A multiple leader Stackelberg model and analysis. Operations Research, 32(2), 390–404.
Siddiqui, S., & Gabriel, S. A. (2013). An SOS1-based approach for solving MPECs with a natural gas market
application. Networks and Spatial Economics, 13(2), 205–227.
Smeers, Y. (1997). Computable equilibrium models and the restructuring of the European electricity and gas
markets. The Energy Journal, 18(4), 1–32.
Song, Y., Ni, Y., Wen, F., Hou, Z., & Wu, F. (2003). Conjectural variation based bidding strategy in spot markets:
Fundamentals and comparison with classical game theoretical bidding strategies. Electric Power Systems
Research, 67(1), 45–51.
Su, C. (2005). Equilibrium problems with equilibrium constraints: Stationarities, algorithms, and applications.
Ph.D. thesis, Stanford University.
Ventosa, M., Baillo, A., Ramos, A., & Rivier, M. (2005). Electricity market modeling trends. Energy policy,
33(7), 897–913.
von Stackelberg, H. (1934). Marktform und gleichgewicht. Vienna: J. Springer.
Weber, J. D., & Overbye, T. J. (1999). A two-level optimization problem for analysis of market bidding
strategies. IEEE power engineering society summer meeting, Edmonton, AB, Canada (pp. 682–687).
Wogrin, S., Centeno, E., & Barquín, J. (2011a). Generation capacity expansion in liberalized electricity markets:
A stochastic MPEC approach. IEEE Transactions on Power Systems, 26(4), 2526–2532.
Wogrin, S., Hobbs, B., Ralph, D., Centeno, E., & Barquin, J. (2011b). Open versus closed loop capacity
equilibria in electricity markets under perfect and oligopolistic competition. Mathematical Programming
(Series B), 140(2), 295–322.
Xu, H. (2005). An MPCC approach for stochastic Stackelberg-Nash-Cournot equilibrium. Optimization, 54(1),
27–57.
Xu, H., & Zhang, D. (2013). Stochastic Nash equilibrium problems: Sample average approximation and
applications. Computational Optimization and Applications, 55(3), 597–645.
Yang, Y., Zhang, Y., Li, F., & Chen, H. (2012). Computing all Nash equilibria of multiplayer games in electricity
markets by solving polynomial equations. IEEE Transactions on Power Systems, 27(1), 81–91.
Zerrahn, A., & Huppmann, D. (2014). Network expansion to mitigate market power: How increased integration
fosters welfare. Report.
Zhang, X. (2010). Restructured electric power systems: Analysis of electricity markets with equilibrium models.
IEEE press series on power engineering. New York: Wiley.
