
Strategic interaction in trend-driven dynamics

Paolo Dai Pra, Elena Sartori and Marco Tolotti

March 2013

Abstract
We propose a discrete-time stochastic dynamics for a system of many interacting agents. At
each time step, agents aim at maximizing their individual payoff, which depends on their action,
on the global trend of the system and on a random noise; frictions are also taken into account.
The equilibria of the resulting sequence of games give rise to a stochastic evolution. In the limit
of infinitely many agents, a law of large numbers is obtained; the limit dynamics consist of an
implicit dynamical system, possibly multiple-valued. For a special model, we determine the
phase diagram for the long-time behavior of these limit dynamics and we show the existence of
a phase where a locally stable fixed point coexists with a locally stable periodic orbit.
Keywords: Mean-field interaction, multi-agent models, phase transition, strategic games.

1 Introduction

In recent years, modeling interactions in the social sciences has become a crucial field of research. In
particular, the increasing awareness that systemic risk may lead to large losses in financial markets
has revealed that the effects of contagion were typically underestimated, stimulating the search for
models that could allow reliable predictions. It would be impossible to faithfully account for the
several lines of research that have dealt with this problem; we rather limit ourselves to a specific
class of models, namely those with a mean-field interaction. In these models the interaction is all to
all, i.e., it is not subject to any geometrical constraint; this is usually unrealistic in physical systems,
where the interaction forces have short range, but may be reasonable in some social systems, where
the fast spread of information prevails over local interactions.
From the mathematical point of view, the study of diffusions and spin systems with mean-field
interaction dates back to [8], [10]; many theoretical issues, such as stability, metastability and criticality,
are still under investigation (see, for example, [2]). For applications to the social sciences, the reader
can refer to [1], [3], [6] or [9]. Most of the cited models are formulated in terms of Markov dynamics,
where the parameters (the drift for diffusions or the spin-flip rate for spin systems) are deterministic
functions of the current state; the mean-field assumption in the interaction corresponds to the fact

Department of Mathematics, University of Padova, Via Trieste 63, I-35121 Padova, Italy; daipra@math.unipd.it
Department of Management, Ca' Foscari University of Venice, S. Giobbe - Cannaregio 873, I-30121 Venice, Italy;
esartori@unive.it

Department of Management, Ca' Foscari University of Venice, S. Giobbe - Cannaregio 873, I-30121 Venice, Italy;
tolotti@unive.it

that these functions are invariant under permutations of the components of the system. The aim of
this paper is to propose a different updating mechanism for the dynamics. In order to motivate
our model, we first consider a very simple version of the traditional mean-field dynamics.
Consider a system σ(t) = (σ₁(t), σ₂(t), …, σ_N(t)) of N spins σ_i(t) ∈ {−1, 1}, evolving in
discrete time t = 0, 1, 2, …. We consider Markov dynamics where, at any time t ≥ 1, the spins are
simultaneously and independently updated. Models with sequential updating, i.e., where at most
one spin flips at a given time, could also be considered, as could models in continuous time. Many basic
properties do not really depend on this choice. Here it is convenient to choose the simultaneous,
or parallel, updating, in view of the modifications of the model that we propose later.
Simultaneous, independent updating and permutation invariance strongly limit the choice of
the transition probabilities, which we assume to be of the following form:
  P(σ(t+1) = σ′ | σ(t) = σ) = ∏_{i=1}^N P(σ_i(t+1) = σ′_i | σ(t) = σ),   (1.1)
and
  P(σ_i(t+1) = σ′_i | σ(t) = σ) = exp[β σ′_i f(m_N(σ))] / (2 cosh[β f(m_N(σ))]),   (1.2)
where
  m_N(σ) := (1/N) ∑_{i=1}^N σ_i,
β > 0 and f is a C¹-function with strictly positive first derivative. Setting m_N(t) := m_N(σ(t)),
it is easy to check that {m_N(t) : t ≥ 0} is itself a Markov process and that its dynamics are nearly
deterministic for N large: if m_N(0) converges to m(0) in probability, then m_N(t) converges to m(t)
in probability for every t > 0, where m(t) solves
  m(t+1) = tanh(β f(m(t))).   (1.3)

In the infinite-volume limit (N → +∞), the long-time behavior of the system can easily be
studied: there exists a critical value β_c > 0 such that (1.3) has a unique, globally stable equilibrium
for β < β_c, while for β > β_c multiple locally stable equilibria emerge.
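The limit recursion (1.3) and its phase transition can be explored numerically. The sketch below makes the illustrative (and purely hypothetical) choice f(m) = m, for which f′ > 0 and the critical value is β_c = 1; it is not the paper's general setting, only a concrete instance.

```python
import math

def iterate_limit(beta, m0, steps=200, f=lambda m: m):
    # Iterate the limit recursion m(t+1) = tanh(beta * f(m(t))), eq. (1.3).
    m = m0
    for _ in range(steps):
        m = math.tanh(beta * f(m))
    return m

# With the illustrative choice f(m) = m, the critical value is beta_c = 1:
# below it m = 0 is globally stable; above it two symmetric locally stable
# equilibria (the nonzero roots of m = tanh(beta*m)) emerge.
sub = iterate_limit(beta=0.5, m0=0.4)   # attracted to 0
sup = iterate_limit(beta=2.0, m0=0.4)   # attracted to the positive root
```

Starting from other initial conditions in (0, 1] gives the same two limits, which is the numerical signature of the unique vs. multiple equilibria regimes.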
At the microscopic level (N < +∞), these dynamics can be formulated in terms of payoff
maximization (see [5]). Suppose σ_i(t) is the action of the i-th agent at time t: she can decide
whether to be part (σ_i(t) = 1) or not to be part (σ_i(t) = −1) of a project (an investment, a
political party or any other binary choice). The agents act strategically, playing simultaneously:
at time t+1 each of them chooses the action σ_i(t+1) in order to maximize her own (random)
payoff, given by
  U_i(σ_i(t+1); σ(t)) := σ_i(t+1) [f(m_N(t)) + ε_i(t)],   (1.4)
where {ε_i(t) : i = 1, …, N, t ≥ 0} is a family of i.i.d. random variables with distribution
  φ(x) := P(ε ≤ x) = 1 / (1 + e^{−2βx}),  β > 0.   (1.5)
Note that the payoff U_i for the i-th agent depends on:

- her action σ_i(t+1);
- all actions σ_j(t) at the previous time t, through the participation rate m_N(t);
- a local random noise ε_i(t).
Given σ(t), the chosen action will be σ_i(t+1) = 1 if and only if f(m_N(t)) + ε_i(t) > 0, which occurs
with probability as in (1.2). By the independence of the noises ε_i, this produces the stochastic
evolution (1.1)-(1.2), which is therefore formulated as a sequence of static stochastic optimizations.
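The equivalence between the payoff maximization (1.4)-(1.5) and the transition probabilities (1.2) can be checked by simulation: sampling the logistic noises by inverting the CDF in (1.5), the empirical fraction of agents choosing +1 should match exp[βf(m)] / (2 cosh[βf(m)]). A minimal sketch, again with the illustrative choice f(m) = m:

```python
import random, math

def logistic_noise(beta):
    # Inverse-CDF sample from phi(x) = P(eps <= x) = 1/(1 + e^{-2 beta x}).
    u = random.random()
    return math.log(u / (1.0 - u)) / (2.0 * beta)

def step(sigma, beta, f):
    # Parallel update: agent i chooses +1 iff f(m_N(t)) + eps_i(t) > 0.
    m = sum(sigma) / len(sigma)
    return [1 if f(m) + logistic_noise(beta) > 0 else -1 for _ in sigma]

random.seed(0)
beta, f = 1.0, (lambda m: m)
sigma = [1] * 6000 + [-1] * 4000        # m_N(0) = 0.2
sigma = step(sigma, beta, f)
emp = sum(s == 1 for s in sigma) / len(sigma)
# emp should be close to exp(beta*f(m)) / (2*cosh(beta*f(m))) = phi(f(m))
```

By the symmetry of the logistic distribution, P(ε > −a) = φ(a), which is exactly the spin-flip probability in (1.2).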
The purpose of this paper is to modify this optimization mechanism, in which each agent
independently chooses her action on the basis of the previous participation rate. The modification
acts at two levels:
1. we generalize the payoff, by adding a term depending on the trend, i.e., on the variation of
the participation rate, and a friction, or transaction cost; this may be quite appropriate in
many applications;
2. agents choose their action simultaneously but not independently, on the basis of their forecast
of the future value of the participation rate.
More precisely, the payoff of the i-th agent is
  U_i(σ_i(t+1), σ_{−i}(t+1); σ(t)) = σ_i(t+1) [f(m_N(t+1)) + g(m_N(t+1) − m_N(t)) + ε_i(t) + γ σ_i(t)],   (1.6)
where g is a given continuous, increasing function, which favors trend imitation, γ > 0
represents the friction, and σ_{−i}(t+1) := (σ_j(t+1))_{j≠i}. Note that, unlike in (1.4), the payoff of
the i-th agent depends on the simultaneous actions of the other players. It is, therefore, nontrivial
to define the optimal actions. We follow, in this respect, a standard game-theoretical approach.
Suppose the actions σ(t) are given.
- The local noise ε_i(t) is observable by the i-th agent, but not by the others. Thus σ_i(t+1)
  must be σ(ε_i(t))-measurable, i.e., a measurable function of ε_i(t).
- Agents know the common distribution of the noise. Accordingly, they aim at a Nash equilibrium, i.e., a vector of actions σ(t+1) satisfying the following property: for each i = 1, 2, …, N,
  σ_i(t+1) maximizes over {−1, 1} the function
    s_i ↦ Ū_i(s_i, σ_{−i}(t+1); σ(t)) := E^i[ U_i(s_i, σ_{−i}(t+1); σ(t)) ],   (1.7)
  where E^i denotes the expectation with respect to the joint distribution of ε_{−i}(t) := (ε_j(t))_{j≠i}.
  Note that this expectation is justified by the fact that σ_{−i}(t+1) is a function of ε_{−i}(t), which
  is not observed by the i-th agent.
The stochastic dynamics (σ(t))_{t≥0} we study in this paper result from the sequence of these stochastic games. Note that they are not a priori well defined: there may be no Nash equilibrium at all, or
multiple Nash equilibria may appear. In Section 2 we establish the existence of at least one Nash
equilibrium, which gives rise to possibly multiple-valued dynamics. Section 3 is devoted to the

study of the dynamics in the limit of infinitely many agents, based on a law of large numbers. In
the limit, the dynamics are solutions of a deterministic implicit recursion, though possibly multiple-valued. In Section 4 we study the long-time behavior of the dynamics with infinitely many players,
in the special case in which f ≡ 0 and some minimal regularity of the function g
is guaranteed. In Section 5 we develop in detail the baseline case, where g is linear. Depending
on the values of the parameters of the model, the limit dynamics can have a locally stable fixed
point, a locally stable periodic orbit of period 2, or coexistence of the two. We also perform some
numerical simulations for a large, but finite, number of agents; not surprisingly, in the coexistence
region of the parameters, we see a random regime-switching behavior, in which periods of stable
participation rates are separated by periods of fast fluctuations.

2 Existence of Nash equilibria

In what follows, we make the following assumptions on the model.
(a) The functions f : [−1, 1] → R and g : [−2, 2] → R are either identically zero or of class C¹, with
strictly positive first derivatives.
(b) The random variables {ε_i(t) : i = 1, 2, …, N, t ≥ 0} are independent with common distribution φ; moreover, φ is a continuous distribution (i.e., with no point mass).
We write φ(x) rather than φ((−∞, x]). Assume that σ(t) is given and consider the stochastic game
defined in (1.5), (1.6) and (1.7).
Proposition 2.1. At least one Nash equilibrium exists.
Proof. Assume σ(t+1) is a Nash equilibrium, corresponding to the noise vector ε(t). By definition
of Nash equilibrium, σ_i(t+1) = 1 if and only if
  E^i[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + γσ_i(t) + ε_i(t) > 0.   (2.1)
In principle, it could be that σ_i(t+1) = 1 even though the above inequality is an equality; this event,
however, has probability zero, since ε_i(t) has continuous distribution, and can be ignored. Since
inequality (2.1) persists for increasing values of ε_i(t), σ_i(t+1) must be an increasing function of
ε_i(t) as well; hence, it can be identified with the threshold value λ_i(t) ∈ R̄ := R ∪ {−∞, +∞},
defined by
  σ_i(t+1) = 2·1_{(λ_i(t),+∞)}(ε_i(t)) − 1,   (2.2)
where 1_S denotes the indicator function of a set S. Thus, the set of actions for this game can be
identified with R̄^N, which is compact and convex. Now, consider a vector λ ∈ R̄^N and fix an index
i. Suppose that the actions σ_j = 2·1_{(λ_j,+∞)}(ε_j(t)) − 1 are given for j ≠ i, and that player i
aims at maximizing her own utility
  E^i[ U_i(s_i, σ_{−i}(t+1); σ(t)) ]
    = E^i[ s_i { f( s_i/N + (1/N) ∑_{j≠i} σ_j ) + g( s_i/N + (1/N) ∑_{j≠i} σ_j − m_N(t) ) + γσ_i(t) + ε_i(t) } ],
whose maximum is attained at σ_i = 2·1_{(λ̄_i,+∞)}(ε_i(t)) − 1, with
  λ̄_i = −γσ_i(t) − E^i[ f( 1/N + (1/N) ∑_{j≠i} σ_j ) + g( 1/N + (1/N) ∑_{j≠i} σ_j − m_N(t) ) ].   (2.3)

If, in this last formula, we write σ_j = 2·1_{(λ_j,+∞)}(ε_j(t)) − 1, one notes that λ̄_i is a function λ̄_i(λ)
of the vector λ (actually of λ_{−i}). Given two vectors λ and λ′, one sees from (2.3) that
  |λ̄_i(λ) − λ̄_i(λ′)| ≤ c P( ⋃_{j=1}^N { min(λ_j, λ′_j) ≤ ε_j ≤ max(λ_j, λ′_j) } ),
where c > 0 is a suitable constant. By continuity of the distribution φ, the probability on the right-hand side goes to zero as λ′ → λ. Thus the map λ ↦ λ̄, which in game theory is called the best
response map, is continuous. Being defined on a compact and convex set, the best response map
has a fixed point, by a standard fixed-point argument. It is easily checked that the fixed points of the
best response map are exactly the Nash equilibria.
Equations (1.5), (1.6) and (1.7) define a sequence of stochastic games that have to be solved
recursively. Proposition 2.1 shows that a Nash equilibrium exists, but does not guarantee uniqueness. We assume that a specific Nash equilibrium is selected via a non-anticipative criterion: more
precisely, for every t ≥ 0, the vector of actions σ(t) is stochastically independent of the random
variables {ε_i(t) : i = 1, …, N}.
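The fixed-point structure above can be illustrated numerically in a simplified setting: f ≡ 0, linear g(z) = kz, and the noises treated as realized quantities (this is the realized-noise variant used in the agent-based scheme of Section 5.1, not the expectation-based game itself; all parameter values below are illustrative). Sequential best-response iteration then terminates, and the resulting profile is a fixed point of the (realized-noise) best response map, i.e., a Nash equilibrium for those noises:

```python
import random, math

def logistic_noise(beta, rng):
    # Inverse-CDF sample from phi(x) = 1/(1 + e^{-2 beta x}), eq. (1.5).
    u = rng.random()
    return math.log(u / (1.0 - u)) / (2.0 * beta)

def find_equilibrium(sigma_prev, eps, k, gamma, max_sweeps=500):
    # Sequential best responses for one period with f = 0 and g(z) = k z:
    # agent i prefers +1 iff k*((tot - sigma_i)/N - m_prev)
    #                      + gamma*sigma_prev_i + eps_i > 0,
    # where her own 1/N contribution to m_N(t+1) is accounted for.
    # A profile from which nobody deviates is a fixed point of the
    # best response map (cf. the proof of Proposition 2.1).
    N = len(sigma_prev)
    m_prev = sum(sigma_prev) / N
    sigma = list(sigma_prev)
    tot = sum(sigma)
    for _ in range(max_sweeps):
        changed = False
        for i in range(N):
            drive = k * ((tot - sigma[i]) / N - m_prev) + gamma * sigma_prev[i] + eps[i]
            s_new = 1 if drive > 0 else -1
            if s_new != sigma[i]:
                tot += s_new - sigma[i]
                sigma[i] = s_new
                changed = True
        if not changed:
            break
    return sigma

rng = random.Random(1)
N, beta, k, gamma = 200, 2.0, 1.0, 0.3
sigma_prev = [1 if rng.random() < 0.5 else -1 for _ in range(N)]
eps = [logistic_noise(beta, rng) for _ in range(N)]
sigma_new = find_equilibrium(sigma_prev, eps, k, gamma)
```

Because the payoffs here form an exact potential game (g linear and increasing), every best-response flip strictly increases the potential, so the sweep terminates at a Nash profile.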

3 Dynamics with infinitely many agents

We consider a probability space (Ω, F, P) on which a family of i.i.d. random variables {ε_i(t) : i ≥ 1, t ≥ 0} is defined, and denote by φ the common distribution function. Note that on this space we
can simultaneously define the above stochastic games for any number of players. To emphasize
the dependence on the number of agents, we denote by σ^(N)(t) one Nash equilibrium at time t
and by λ^(N)(t) the associated thresholds. We assume the initial condition σ^(N)(0) is given and
deterministic.
Theorem 3.1. The sequence of stochastic processes (m_N(t))_{t≥0} is tight; moreover, any weak limit
(m(t))_{t≥0} obeys with probability one the implicit recursion
  m(t+1) = [1 + m(t)] φ( f(m(t+1)) + g(m(t+1) − m(t)) + γ )
         + [1 − m(t)] φ( f(m(t+1)) + g(m(t+1) − m(t)) − γ ) − 1.   (3.1)

Proof. Tightness follows from the fact that m_N(t) takes values in the compact set [−1, 1]. Then
consider a convergent subsequence, which we still denote by (m_N), and fix t ≥ 0. By definition of
Nash equilibrium, setting H = 2·1_{(0,+∞)} − 1, the following identity holds for i = 1, 2, …, N:
  σ_i^(N)(t+1) = H( E^i[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + γσ_i^(N)(t) + ε_i(t) ),
yielding
  m_N(t+1) = (1/N) ∑_{i=1}^N H( E^i[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + γσ_i^(N)(t) + ε_i(t) ).   (3.2)

We denote by λ_i^(N)(t) ∈ R̄ the threshold such that
  σ_i^(N)(t+1) = 2·1_{(λ_i^(N)(t),+∞)}(ε_i(t)) − 1.

By possibly considering a further subsequence, we can assume that the sequence of empirical measures
  ρ_N := (1/N) ∑_{i=1}^N δ_{(λ_i^(N)(t), σ_i^(N)(t))}
admits a weak limit ρ. Note that ρ_N is a sequence of probability measures on R̄ × {−1, 1}. The
space of these probabilities, provided with the weak topology, is metrizable as a compact Polish
space. By the Skorohod Theorem (see, e.g., Theorem 2.2.2 in [4]), the sequence ρ_N can be realized
on a probability space (Ω₁, F₁, P₁) in such a way that ρ_N → ρ almost surely. We denote by ν_N
(resp. μ_N) the marginal of ρ_N on R̄ (resp. {−1, 1}). With probability one, ρ_N is the sum of
N point masses of weight 1/N, which we still denote by (λ_i^(N), σ_i^(N))_{i=1}^N. Now, let (Ω₂, F₂, P₂) be
a probability space supporting a sequence (ε_i(t))_{i≥1} of i.i.d. random variables with distribution
φ. We set Ω := Ω₁ × Ω₂, provided with the product σ-field F₁ ⊗ F₂ and the product measure
P := P₁ ⊗ P₂. We denote by E, E₁, E₂ the corresponding expectations, while E₂^i will denote the
E₂-expectation conditioned on ε_i(t). In this way, all the objects appearing in (3.2) have been realized
on Ω, without modifying their joint distribution. In particular,
  m_N(t+1) = (1/N) ∑_{i=1}^N χ(λ_i^(N), ε_i),
with χ(λ, ε) = 2·1_{(λ,+∞)}(ε) − 1. Set
  ψ(λ) := ∫ χ(λ, ε) φ(dε) = 2 φ((λ, +∞)) − 1.
Note that, since we have assumed φ to be a continuous distribution, ψ is continuous. Thus the
sequence ∫ ψ dν_N converges almost surely. Moreover, by independence of the ε_i(t),
  E[ ( m_N(t+1) − ∫ ψ dν_N )² ]
    = (1/N²) ∑_{i,j} E₁{ E₂[ ( χ(λ_i^(N), ε_i) − ψ(λ_i^(N)) ) ( χ(λ_j^(N), ε_j) − ψ(λ_j^(N)) ) ] }
    = (1/N²) ∑_i E₁{ E₂[ ( χ(λ_i^(N), ε_i) − ψ(λ_i^(N)) )² ] } ≤ C/N   (3.3)
for a suitable constant C > 0. This shows that m_N(t+1) − ∫ ψ dν_N converges to zero in L²
and, therefore, almost surely along a subsequence. Thus, along a suitable subsequence, m_N(t+1)
converges almost surely to a random variable m(t+1) that, being also the almost sure limit of
∫ ψ dν_N, is F₁-measurable.
Now we aim at taking the almost sure limit in (3.2). For this purpose, let {H_δ : δ > 0} be a
family of bounded Lipschitz functions such that H = inf_δ H_δ.
By continuity of H_δ, f, g and the fact that m(t+1) is F₁-measurable, if the limit of
  (1/N) ∑_{i=1}^N H_δ( [f(m(t+1)) + g(m(t+1) − m(t))] + γσ_i^(N)(t) + ε_i(t) )   (3.4)
exists almost surely, where m(t) is the almost sure limit of m_N(t) = ∫ σ μ_N(dσ), then the following
sequence has the same almost sure limit:
  (1/N) ∑_{i=1}^N H_δ( E^i[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + γσ_i^(N)(t) + ε_i(t) ).

By repeating the argument in (3.3), the almost sure limit of (3.4) coincides with that of
  (1/N) ∑_{i=1}^N ∫ H_δ( [f(m(t+1)) + g(m(t+1) − m(t))] + γσ_i^(N)(t) + ε ) φ(dε)
    = ∫∫ H_δ( [f(x) + g(x − y)] + γσ + ε ) φ(dε) μ_N(dσ) |_{x=m(t+1), y=m(t)},   (3.5)
provided (3.5) converges almost surely; but this is guaranteed by the fact that μ_N converges
almost surely, namely to
  ((1 + m(t))/2) δ_{1} + ((1 − m(t))/2) δ_{−1}.
Summing all up,
  (1/N) ∑_{i=1}^N H_δ( E^i[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + γσ_i^(N)(t) + ε_i(t) )
converges almost surely (along a subsequence) to
  ((1 + m(t))/2) ∫ H_δ( [f(m(t+1)) + g(m(t+1) − m(t))] + γ + ε ) φ(dε)
  + ((1 − m(t))/2) ∫ H_δ( [f(m(t+1)) + g(m(t+1) − m(t))] − γ + ε ) φ(dε),
which is, therefore, an upper bound for m(t+1), the limit of the left-hand side of (3.2). Taking
the infimum over δ > 0, we get
  m(t+1) ≤ ((1 + m(t))/2) ∫ H( [f(m(t+1)) + g(m(t+1) − m(t))] + γ + ε ) φ(dε)
         + ((1 − m(t))/2) ∫ H( [f(m(t+1)) + g(m(t+1) − m(t))] − γ + ε ) φ(dε)   (3.6)
  = [1 + m(t)] φ( f(m(t+1)) + g(m(t+1) − m(t)) + γ )
  + [1 − m(t)] φ( f(m(t+1)) + g(m(t+1) − m(t)) − γ ) − 1.
A similar argument can be repeated with H_δ^− Lipschitz, with sup_δ H_δ^− = H^−, where H^− =
H − 2·1_{{0}} is the left-continuous modification of H. Since φ is a continuous distribution, we have,
for every λ ∈ R,
  ∫ H^−( λ + γ + ε ) φ(dε) = ∫ H( λ + γ + ε ) φ(dε),
so that we obtain a lower bound for lim_{N→+∞} m_N(t+1), which matches the upper bound in (3.6).

Remark 3.2. Suppose the following further conditions are verified:
- the limit lim_{N→+∞} m_N(0) =: m(0) ∈ [−1, 1] exists;
- the implicit recursion (3.1), with initial condition m(0), is single-valued up to time T ∈ N ∪ {+∞}.
Under these conditions, by Theorem 3.1, the sequence (m_N(t))_{t≤T} converges weakly to the deterministic, unique solution of this recursion. By a Borel-Cantelli type argument, the result of Theorem
3.1 can be refined in this case, showing that the limit holds indeed in the almost sure sense.
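When the recursion (3.1) is single-valued, it can be iterated numerically by solving for m(t+1) at each step. A sketch under illustrative assumptions (f ≡ 0, g(z) = kz, logistic φ): the right-hand side of (3.1) minus y has derivative bounded below by kβ in absolute value, so for kβ < 1 the equation G(m, y) = 0 has a unique root in y and bisection applies.

```python
import math

def phi(x, beta):
    # Logistic noise distribution, eq. (1.5).
    return 1.0 / (1.0 + math.exp(-2.0 * beta * x))

def next_m(m, beta, k, gamma, tol=1e-12):
    # Solve the implicit recursion (3.1) with f = 0 and g(z) = k z:
    #   y = (1+m) phi(k(y-m)+gamma) + (1-m) phi(k(y-m)-gamma) - 1
    # by bisection in y. When k*beta < 1 the defining function is strictly
    # increasing in y, so the recursion is single-valued.
    def G(y):
        r = k * (y - m)
        return y - (1 + m) * phi(r + gamma, beta) - (1 - m) * phi(r - gamma, beta) + 1
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, k, gamma = 2.0, 0.2, 0.3   # illustrative values with k*beta < 1
m = 0.5
for _ in range(300):
    m = next_m(m, beta, k, gamma)
# for these parameters the trajectory is attracted by the fixed point m* = 0
```

Note that G(m, −1) ≤ 0 ≤ G(m, 1) always holds, so the bisection bracket is valid for any m ∈ [−1, 1].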

4 Steady states of the limit evolution equation

Equation (3.1) describes the behavior of the system with utility function (1.6) in the limit of
infinitely many players. We are interested in detecting the t-stationary solution(s) of this equation
and in studying its (their) stability properties. We refer to (3.1) as the limit evolution equation of
our system. Notice that (3.1) is an implicit equation of the form G(x, y) = 0, where
  G(x, y) := y − (1 + x) φ( f(y) + g(y − x) + γ ) − (1 − x) φ( f(y) + g(y − x) − γ ) + 1.   (4.1)

In this section we consider the problem of existence and local stability for steady states of two
types: fixed points, i.e., those x ∈ [−1, 1] for which G(x, x) = 0, and 2-cycles, i.e., those pairs
(x, y) ∈ [−1, 1]² with x ≠ y and G(x, y) = G(y, x) = 0. These steady states emerge as attractors in the
numerical implementations we illustrate in Section 5. Although we cannot rule out other types of
steady states or attractors, these are the only ones we see in simulations.

We now show how to describe both fixed points and 2-cycles as fixed points of a suitable
map. We introduce a set of minimal assumptions which guarantee differentiability of the maps we
consider below. It is plausible that these assumptions could be weakened at the cost of some further
technicalities.
Assumption 4.1.
A.1 The distribution function φ of the random terms, as introduced in (1.5), is absolutely continuous,
with a continuous, strictly positive density.
A.2 The map g : (−2, 2) → R is of class C¹, odd, with g′(z) > 0 for every z ∈ (−2, 2).
A.3 f ≡ 0.
Note that, in principle, x and y may vary in [−1, 1], so that g should be defined on [−2, 2].
However, it follows from (3.1) that y ∈ (−1, 1) whenever x ∈ (−1, 1). In particular, fixed points
and 2-cycles lie in (−1, 1).
We now define an auxiliary function Ψ : R → R that will be used in the sequel:
  Ψ(R) := [ φ(R + γ) + φ(R − γ) − 1 − g⁻¹(R) (φ(R + γ) − φ(R − γ)) ] / [ 1 − (φ(R + γ) − φ(R − γ)) ],   (4.2)
where g⁻¹ : Im(g) → (−2, 2) is the inverse of g. The map g⁻¹ is well defined thanks to Assumption
A.2. Moreover, Ψ is well defined too, since, by Assumption A.1, φ has a strictly positive derivative,
hence
  φ(R + γ) − φ(R − γ) < 1, for every R ∈ R.
Theorem 4.2. On the open set D := {R ∈ R : Ψ(R) ∈ (−1, 1)}, define the map
  Γ(R) := g(2Ψ(R)),   (4.3)
where Ψ is defined in equation (4.2). Then, under Assumption 4.1, the set of fixed points and
2-cycles of (4.1) is characterized as the set of fixed points of the map Γ. In particular:
i) x ∈ (−1, 1) is a fixed point for G if and only if x = 0. This fixed point corresponds to
R = 0, which is always a fixed point for Γ.
ii) G admits a 2-cycle (x*, y*) if and only if there exists R* ∈ D \ {0} such that Γ(R*) = R*. In
this case y* = Ψ(R*) and x* = −Ψ(R*).
Proof. Define R = g(y − x). A fixed point x for (3.1) (rewritten as in (4.1) with f ≡ 0) is such
that
  x = x [φ(R + γ) − φ(R − γ)] + φ(R + γ) + φ(R − γ) − 1,
  R = g(x − x) = 0.   (4.4)
Setting R = 0 in the first equation of (4.4), we get x = 0. This proves that x = 0 is the only fixed
point for (3.1) and, since R = 0, it also corresponds to the zero fixed point of the map Γ
(note that Γ(0) = 0).

We are now left with point (ii). Note that the equation G(x, y) = 0, with G as in (4.1) and f ≡ 0,
can be rewritten as
  y = x [φ(R + γ) − φ(R − γ)] + φ(R + γ) + φ(R − γ) − 1,
  R = g(y − x).   (4.5)
Solving the second equation in (4.5) for x and inserting the solution x = y − g⁻¹(R) into the first,
we obtain the following alternative formulation of the implicit limit dynamics:
  y = Ψ(R),
  R = g(y − x).   (4.6)
Suppose there is a 2-cycle (x*, y*). Then, by (4.6), there exists R* such that
  y* = Ψ(R*),   R* = g(y* − x*),   (4.7)
and
  x* = Ψ(−R*) = −Ψ(R*),   −R* = g(x* − y*),   (4.8)
which immediately give R* = g(2Ψ(R*)) = Γ(R*).
Vice versa, assume R* = Γ(R*), with Ψ(R*) ∈ (−1, 1).
Setting y* := Ψ(R*), x* := −Ψ(R*), the triple (x*, y*, R*) solves (4.7) and (4.8), so (x*, y*) is a
2-cycle.
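The characterization above reduces the search for 2-cycles to one-dimensional root finding for the map Γ(R) = g(2Ψ(R)). A sketch for the illustrative case of linear g(z) = kz and logistic noise, bisecting Γ(R) − R on (0, 2k) (recall that Γ(2k) < 2k, as shown in Section 5; the parameter values are illustrative, with the fixed point unstable):

```python
import math

def phi(x, beta):
    return 1.0 / (1.0 + math.exp(-2.0 * beta * x))

def Psi(R, beta, k, gamma):
    # eq. (4.2) with g(z) = k z, hence g^{-1}(R) = R / k
    S = phi(R + gamma, beta) + phi(R - gamma, beta) - 1.0
    D = phi(R + gamma, beta) - phi(R - gamma, beta)
    return (S - (R / k) * D) / (1.0 - D)

def Gamma(R, beta, k, gamma):
    # eq. (4.3): Gamma(R) = g(2 Psi(R)) = 2 k Psi(R) for linear g
    return 2.0 * k * Psi(R, beta, k, gamma)

def two_cycle(beta, k, gamma, tol=1e-12):
    # Theorem 4.2 (ii): a 2-cycle corresponds to a fixed point R* > 0 of
    # Gamma. Bisect Gamma(R) - R on (0, 2k), using Gamma(2k) < 2k.
    h = lambda R: Gamma(R, beta, k, gamma) - R
    lo, hi = 1e-9, 2.0 * k - 1e-9
    if h(lo) <= 0:
        return None                    # no sign change detected near 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    R = 0.5 * (lo + hi)
    y = Psi(R, beta, k, gamma)
    return (-y, y)                     # x* = -Psi(R*), y* = Psi(R*)

cyc = two_cycle(beta=2.0, k=1.0, gamma=0.0)   # k above k_c^u = 1/(2 beta)
```

The returned pair can be verified directly against the implicit equation (4.1): both G(x*, y*) and G(y*, x*) vanish up to the bisection tolerance.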
Theorem 4.2 shows that the only steady-state equilibrium corresponds to half of the players
choosing σ = 1. Moreover, it provides conditions for the existence of a 2-cycle. We now discuss
the local (linear) stability of the fixed point and of the 2-cycle.
Proposition 4.3.
i) The fixed point x* = 0 is linearly stable if and only if Γ′(0) < 1.
ii) Let R* ∈ D \ {0} be such that Γ(R*) = R*, and assume Γ′(R*) ≠ 2. Then the 2-step dynamics are
well defined in a neighborhood of x*, and the 2-cycle (x*, y*) is locally linearly stable if and
only if Γ′(R*) < 1.
Proof. We start by proving point (ii). Given a 2-cycle (x*, y*), corresponding to the fixed point
R* of Γ, consider a small perturbation x_ε := x* + ε and the perturbed versions of equations (4.7) and (4.8):
  y_ε = Ψ(R_ε),   R_ε = g(y_ε − x_ε),   (4.9)
and
  x̄_ε = Ψ(Q_ε),   Q_ε = g(x̄_ε − y_ε).   (4.10)
Computing, by implicit differentiation, x̄′_0 := (d/dε) x̄_ε |_{ε=0}, we easily obtain
  x̄′_0 = [ Ψ′(R*) g′(y* − x*) / (1 − Ψ′(R*) g′(y* − x*)) ]².
Note now that
  Γ′(R*) = 2 g′(2Ψ(R*)) Ψ′(R*) = 2 Ψ′(R*) g′(y* − x*),
giving
  x̄′_0 = [ Γ′(R*) / (2 − Γ′(R*)) ]².
This shows both that the implicit two-step map is locally well defined if Γ′(R*) ≠ 2, and that local linear
stability, i.e., |x̄′_0| < 1, is equivalent to Γ′(R*) < 1.
The proof of (i) can be derived similarly, perturbing only (4.7) in a neighborhood of x* = 0.
The attractors of the dynamics we have described are a fixed point and a 2-cycle. In the next
section we explore, by means of an example and of some numerical simulations, the stability of the
fixed point and the existence and stability of the 2-cycle. In particular, we discuss the possibility of
coexistence of a linearly stable 2-cycle with the stable fixed point.

5 A significant example: the linear model

We now specify the function g, defined in (1.6), as follows:
  g(y − x) = k (y − x),   (5.1)
where k > 0 is a given constant. Note that, being g linear in (y − x), it trivially satisfies A.2 of
Assumption 4.1. We also specify the distribution of the noisy component in the utility, assuming
a logistic distribution as already exemplified in (1.5). The choice of logistic error terms is rather
common in many models of evolving social systems (see, for instance, [5] and [11]). Note that the
logistic distribution satisfies A.1 in Assumption 4.1. The parameter β measures the impact of the
random component in the decision process.
We now characterize the attractors of the limit evolution equation (3.1) under the new specifications. Note that in this context g(z) = kz, so the map Γ, as defined in (4.3), reads
  Γ(R) = −2R [φ(R + γ) − φ(R − γ)] / (1 − [φ(R + γ) − φ(R − γ)])
       + 2k [φ(R + γ) + φ(R − γ) − 1] / (1 − [φ(R + γ) − φ(R − γ)]).   (5.2)

Before stating the main result of this section, we prove a technical lemma that will be used in
Proposition 5.2 to deal with possibly neutrally stable 2-cycles.
Lemma 5.1. Define K as the set of k ∈ R₊ such that there exists R ∈ R for which Γ(R) = R and
Γ′(R) = 1. Then K is locally finite, i.e., its intersection with any bounded set is finite.


Proof. Note that the function Γ in (5.2) is of the form Γ(R) = A(R) + kB(R), so that Γ′(R) =
A′(R) + kB′(R). Now, the identities Γ(R) = R and Γ′(R) = 1 imply
  k = (R − A(R)) / B(R) = (1 − A′(R)) / B′(R).   (5.3)
By direct inspection, using the explicit logistic form of φ, it is easily seen that A and B are bounded
functions and that B is bounded away from zero. Thus, if we restrict to values of k in a bounded
interval (0, M], with M > 0, the first equation of (5.3) may be satisfied only for R in a bounded
interval (−a_M, a_M). Moreover, the second equation in (5.3) is an equality between two real analytic
functions which are not identically equal, so it admits only a finite number of solutions in (−a_M, a_M).
Plugging these solutions into the first equation of (5.3), we obtain only a finite number of possible
values of k. We have therefore shown that K ∩ (0, M] is finite for every M > 0.

Proposition 5.2. Define k_c^u := (1/(4β)) (1 + e^{2βγ}) and let K be the locally finite set defined in Lemma
5.1.
i) The fixed point x* = 0 is linearly stable if and only if k ∈ (0, k_c^u).
ii) A 2-cycle exists if and only if k > k_c^l, where k_c^l is a critical value satisfying k_c^l ≤ k_c^u. Moreover,
for every k ∈ (k_c^l, +∞) \ K, there exists a linearly stable 2-cycle.
iii) If γ = 0, then k_c^l = k_c^u; therefore, no coexistence of stable fixed point and 2-cycle is possible.
If, instead, γ > (1/(2β)) log 2, then k_c^l < k_c^u. In this case, coexistence holds for k ∈ (k_c^l, k_c^u).
Proof. We study the fixed point and the 2-cycles by means of Proposition 4.3. By a simple calculation,
we find that
  Γ′(0) = [2k φ′(γ) − 2φ(γ) + 1] / [1 − φ(γ)].
Hence
  Γ′(0) < 1 ⟺ k < φ(γ) / (2φ′(γ)) ⟺ k < (1/(4β)) (1 + e^{2βγ}),   (5.4)
which completes the analysis of the linear stability of the fixed point.
which completes the analysis of linear stability of the fixed point.
For the existence and stability of 2-cycles, we begin by observing that, in this model, R = g(y − x)
takes values in (−2k, 2k). If R ∈ (−2k, 2k) is such that Γ(R) = R, then, using the notation of
Proposition 4.3,
  2Ψ(R) = R/k ∈ (−2, 2),
or, equivalently, Ψ(R) ∈ (−1, 1). Thus R ∈ D, the domain of the map Γ. Therefore, taking also
into account that Γ is an odd function, we conclude that a 2-cycle exists if and only if Γ has a fixed
point in (0, 2k). Since φ(2k − γ) < 1 and φ(R + γ) − φ(R − γ) ∈ [0, 1) for every R, we have
  Γ(2k) = 2 [ k (φ(2k + γ) + φ(2k − γ) − 1) − 2k (φ(2k + γ) − φ(2k − γ)) ] / (1 − [φ(2k + γ) − φ(2k − γ)])
        = 2k [ 1 − 2 (1 − φ(2k − γ)) / (1 − [φ(2k + γ) − φ(2k − γ)]) ] < 2k.

By continuity we obtain the following facts.
Fact 1. A fixed point R ∈ (0, 2k) for Γ (i.e., a 2-cycle) exists if and only if there is R ∈ (0, 2k)
for which Γ(R) ≥ R.
To discuss the linear stability of 2-cycles, it is convenient to establish a refinement of Fact 1.
Fact 2. Suppose there is R ∈ (0, 2k) for which Γ(R) > R and that k ∉ K. Then a linearly stable
2-cycle exists.
Fact 2 is proved as follows. If there exists R ∈ (0, 2k) for which Γ(R) > R then, being Γ(2k) < 2k,
the graph of Γ must cross the graph of the identity at a point R̄ for which Γ′(R̄) ≤ 1. The case
Γ′(R̄) = 1 is ruled out, since k ∉ K.
Having proved Facts 1 and 2, we are now ready to analyze the existence and stability of 2-cycles.
To begin with, we remark that, for γ = 0, Γ(R) = 2kΨ(R) = 2k(2φ(R) − 1), which is a strictly concave function for R > 0. By concavity, a strictly positive fixed point exists if and only if Γ′(0) > 1, i.e., exactly
when the fixed point x* = 0 is unstable. In this case, the stable fixed point cannot coexist with 2-cycles.
When γ > 0, the situation is more complex, but it can be clarified by the following simple statements.
Fact 3. 2-cycles persist as k increases.
Indeed, for given k > 0 and R > 0, suppose Γ(R) − R > 0. Looking at (5.2), it can be seen that,
necessarily,
  [φ(R + γ) + φ(R − γ) − 1] / (1 − [φ(R + γ) − φ(R − γ)]) > 0,
and so Γ(R) is increasing in k. Thus, the inequality Γ(R) > R persists as k increases.
Fact 4. If Γ′(0) > 1, or if Γ′(0) = 1, Γ″(0) = 0 and Γ‴(0) > 0, then a 2-cycle exists.
Indeed, a Taylor expansion of Γ around 0 shows that Γ(R) > R for R > 0 small enough. By
some straightforward computations, we get Γ″(0) = 0 and
  Γ‴(0) = { 2k [ φ‴(γ)(1 − φ(γ)) + 3φ′(γ)φ″(γ) ] − 3φ″(γ) } / (1 − φ(γ))².
Now set k = k_c^u := φ(γ)/(2φ′(γ)) = (1/(4β))(1 + e^{2βγ}), which implies Γ′(0) = 1 and
  Γ‴(0)|_{k=k_c^u} > 0 ⟺ φ(γ)φ‴(γ) − 3φ′(γ)φ″(γ) > 0,
meaning that
  8β³ e^{4βγ} (e^{4βγ} − e^{2βγ} − 2) / (e^{2βγ} + 1)⁵ > 0 ⟺ e^{2βγ} > 2.
Then Γ‴(0) > 0 whenever 2βγ > log 2. In this case, for k = k_c^u, there exists R > 0 with Γ(R) > R.
By continuity, the existence of such an R is preserved under a small decrease of k. In other words, 2-cycles
exist for all k > k_c^l, where k_c^l < k_c^u: in this case 2-cycles may coexist with the stable fixed point.

Remark 5.3. In Proposition 5.2 we show that γ > (1/(2β)) log 2 is a sufficient condition for the coexistence
of the fixed point and a locally stable 2-cycle. Our proof does not cover the case 0 < γ ≤ (1/(2β)) log 2.
Numerical simulations suggest that there is no coexistence in this range, so we conjecture that k_c^l < k_c^u
if and only if γ > (1/(2β)) log 2.

Relying on Proposition 5.2, we now briefly analyze how the picture of the stationary regime
changes depending on the parameters of the model. In particular, we are able to discuss coexistence
of the stable fixed point and 2-cycles. For γ = 0, no coexistence of the stable fixed point and a stable
2-cycle is possible: for k < k_c^u = 1/(2β), the fixed point is linearly stable; for k > k_c^u, the stability
of the fixed point is lost and a linearly stable 2-cycle arises. Introducing frictions, coexistence of
the two attractors becomes possible. In this case the two possibly different thresholds k_c^l and k_c^u,
with k_c^l ≤ k_c^u = (1/(4β))(1 + e^{2βγ}), separate the stability regions of the two attractors. Eventually, for
γ > (1/(2β)) log 2, the stable fixed point and a stable 2-cycle coexist. In Figure 1, we plot k_c^l and k_c^u
as functions of β, for γ = 0 (Panel A) and for γ = 0.5 (Panel B). Note that, for γ = 0, k_c^l = k_c^u.
In Panel B, the curve β ↦ k_c^l(β) has been obtained numerically. Since k_c^u → +∞ as β → +∞,
the introduction of a friction term strongly stabilizes the fixed point locally at low noise, without
necessarily losing the stability of the 2-cycle. In the coexistence region, the typical behavior is
the following: an unstable 2-cycle separates the domains of attraction of the fixed point and of the
stable 2-cycle. As γ increases, the domain of attraction of the fixed point grows and the oscillation
|y* − x*| at the stable 2-cycle shrinks. This picture is well supported by numerical evidence.
We conclude this section by discussing the role of β. If we let β → 0⁺, i.e., the error dominates,
only the region of the fixed point survives. This is due to the fact that, when β → 0⁺, the agents
completely randomize their choice. Hence, in the asymptotic model with infinitely many agents, half of
them choose σ = 1 and half σ = −1. The optimal participation rate is, therefore, always equal to
0. When β increases, the impact of the noise component shrinks and the agents are more prone to
extreme behaviors: the trend-dependent term in the utility prevails over the noise.

5.1 Simulations: the dynamics with a finite number of players

As already noticed, coexistence is one of the most significant results of this model. In particular,
it has important consequences at the level of the finite-dimensional system. We perform some
agent-based simulations in order to capture this aspect. More precisely, we simulate a large, but
finite, population of N agents. At each time step, we let the N agents play (sequentially) their
best response to the (fixed) actions of the other agents, and we let the algorithm run until a
fixed profile is reached. In doing this, we are numerically identifying a Nash equilibrium as a
strategy profile that is a fixed point of the best response map. In the case of multiple Nash
equilibria, in any simulation the algorithm identifies (randomly) one among them.
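A minimal version of this scheme is sketched below. The payoff σ·k·(m − m_prev) − ε·1{switch} + noise is a stylized stand-in (the actual payoff is defined earlier in the paper), but it keeps the two key ingredients: a trend-dependent coordination term and a friction cost. Since the interaction term is pairwise and symmetric, this stand-in game is a potential game, so sequential best response cannot cycle and always reaches a fixed profile, i.e., a Nash equilibrium:

```python
import random

def sequential_best_response(N=200, k=1.8, eps=0.5, m_prev=0.0, seed=1,
                             max_sweeps=1000):
    """Let the N agents play best responses in turn, sweeping over the
    population until nobody deviates, i.e. until a fixed profile of
    the best response map (a Nash equilibrium) is reached."""
    rng = random.Random(seed)
    sigma = [rng.choice([-1, 1]) for _ in range(N)]
    prev = sigma[:]  # previous-period actions (friction reference)
    # one fixed idiosyncratic noise draw per agent and per action
    noise = [{s: rng.gauss(0.0, 0.3) for s in (-1, 1)} for _ in range(N)]

    def payoff(i, s):
        m = (sum(sigma) - sigma[i] + s) / N   # aggregate if i plays s
        return s * k * (m - m_prev) - eps * (s != prev[i]) + noise[i][s]

    for _ in range(max_sweeps):
        changed = False
        for i in range(N):
            best = max((-1, 1), key=lambda s: payoff(i, s))
            if best != sigma[i]:
                sigma[i], changed = best, True
        if not changed:        # fixed profile: nobody wants to deviate
            return sigma
    raise RuntimeError("no fixed profile within max_sweeps")
```

Different noise draws (different seeds) can select different Nash equilibria, which is the random selection mentioned above.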
In the case of coexistence, the finite-dimensional system exhibits a regime-switching phenomenon.
Suppose N is large. The finite system tends to stay close to the infinite one; in particular, it gets
attracted by one of the attractors, determined by the initial condition. After some time, a large
random fluctuation occurs in the finite system, leading the aggregate variable m_N to fall into the
basin of attraction of a different attractor, where the system stays until the next large fluctuation.
The waiting time to the next large fluctuation has a distribution close to exponential, with
a mean that grows exponentially in N. Despite this exponential growth, the regime switching
is clearly visible even for N of the order of a few thousand. In the context of statistical mechanics
models, this phenomenon is often called metastability and it is well understood for some simple
models (see, e.g., [2]). In Figure 2 we plot two evolutions of m_N in the coexistence region; the two
evolutions differ in the initial condition, which in the infinite system would lead to different steady
states.
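The exponentially growing waiting times can be visualized on a generic bistable toy chain (this is not the model itself, just the standard metastability mechanism): a noisy gradient step in a double well hops between basins at random times whose mean grows like exp(const/noise²), the analogue of the exp(const·N) growth above:

```python
import random

def switching_times(noise=0.25, n_steps=200_000, seed=3):
    """Euler steps of dx = (x - x^3) dt + noise dW: the state hops
    between the wells at +/-1; we record the times elapsed between
    crossings of the unstable point x = 0."""
    rng = random.Random(seed)
    x, side, last, times = 1.0, 1, 0, []
    for t in range(n_steps):
        x += 0.1 * (x - x**3) + noise * rng.gauss(0.0, 1.0)
        x = max(-3.0, min(3.0, x))   # keep the discrete chain bounded
        if x * side < 0:             # crossed into the other basin
            times.append(t - last)
            last, side = t, -side
    return times

times = switching_times()
print(len(times), sum(times) / len(times))  # number of switches, mean waiting time
```

Lowering the noise (the analogue of increasing N) makes the switches rapidly rarer, which is why in Figure 2 at N = 500 the finite system tracks one attractor for long stretches.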

Concluding remarks

We propose a mean-field model whose dynamics are trend-driven, in the sense that the transition
rates depend on the variation of the aggregate variable (the trend), thus introducing an endogenous
relation between the states of the system at two subsequent dates. This feature induces natural and
non-trivial dynamics that we have studied formally.
Thinking of the system as a social network, we are modeling agents facing bounded rationality:
when deciding their action, they rely on random utilities (see [5]) characterized by a noisy
component. Differently from the majority of the literature in this field, we let the agents
update their opinions in parallel, i.e., their action is the consequence of a game whose payoffs
depend on expectations about the behavior of the population.
Notice that, owing to the assumption of simultaneous updating with a trend-driven component,
at equilibrium the limiting dynamics converge either to a fixed point or to a 2-cycle. Moreover,
because of the strategic behavior of the agents, the two limiting attractors coexist for some values
of the parameters; this seems to be a novelty in probabilistic models describing social interaction
and, possibly, contagion. See, for instance, [3], [6] or [7], which are based on models where agents
update their choices sequentially without any trend component: there, the stable attractors can
only be fixed points.
Acknowledgments
Special thanks go to Fulvio Fontini for fruitful discussions. The authors also thank Roberto
Casarin, Gustav Feichtinger, Marco LiCalzi, Antonio Nicolò and Paolo Pellizzari. The authors
acknowledge the financial support of the Research Grants of the Ministero dell'Istruzione,
dell'Università e della Ricerca: PRIN 2008, Probability and Finance, and PRIN 2009, Complex
Stochastic Models and their Applications in Physics and Social Sciences. We are responsible for
all remaining errors.

References
[1] Barucci, E., Tolotti, M.: Social interaction and conformism in a random utility model. J. Econ.
Dyn. Control 36(12), 1855-1866 (2012)
[2] Bianchi, A., Bovier, A., Ioffe, D.: Sharp asymptotics for metastability in the random field
Curie-Weiss model. Electron. J. Probab. 14(53), 1541-1603 (2009)
[3] Blume, L., Durlauf, S.: Equilibrium concepts for social interaction models. Intern. Game
Theory Rev. 5(3), 193-209 (2003)


[4] Borkar, V.S.: Probability Theory: an advanced course. Springer-Verlag, New York (1995)
[5] Brock, W., Durlauf, S.: Discrete choice with social interactions. Rev. Econ. Stud. 68(2), 235-260 (2001)
[6] Collet, F., Dai Pra, P., Sartori, E.: A Simple Mean Field Model for Social Interactions:
Dynamics, Fluctuations, Criticality. J. Stat. Phys. 139(5), 820-858 (2010)
[7] Dai Pra, P., Runggaldier, W.J., Sartori, E., Tolotti, M.: Large portfolio losses: A dynamic
contagion model. Ann. Appl. Probab. 19(1), 347-394 (2009)
[8] Dawson, D.A., Gärtner, J.: Large deviations from the McKean-Vlasov limit for weakly interacting diffusions. Stochastics 20(4), 247-308 (1987)
[9] Garnier, J., Papanicolaou, G., Yang, T.-W.: Large deviations for a mean field model of systemic risk. arXiv: 1204.3536 [q-fin.RM] (2012)
[10] Gärtner, J.: On the McKean-Vlasov limit for interacting diffusions. Math. Nachr. 137(1),
197-248 (1988)
[11] Nadal, J.P., Phan, D., Gordon, M.B., Vannimenus, J.: Multiple equilibria in a monopoly
market with heterogeneous agents and externalities. Quant. Finance 5(6), 557-568 (2005)


[Figure 1 about here. Panel A: phase diagram without frictions (ε = 0); Panel B: phase diagram with frictions (ε = 0.5). The curves k_c^l and k_c^u separate regions 1, 2 and 3.]
Figure 1: Phase diagram of the parameters of the model. In Panel A we put ε = 0, in Panel B
ε = 0.5. In both panels, 1 denotes the region where only the fixed point x = 0 is stable; in
region 2, the stable fixed point and a stable 2-cycle coexist; in region 3, only the 2-cycle is stable.

[Figure 2 about here. Two panels: participation rate vs. time (0 to 400) for k = 1.8, β = 1, ε = 0.5, comparing the asymptotic dynamics with a simulation of N = 500 agents, for initial conditions m0 = 0.6 (above) and m0 = 0.2 (below).]
Figure 2: Asymptotic regime (2-cycle above, fixed point below) for the optimal participation rate
(red dotted line) and finite dimensional simulation with N = 500 agents (blue continuous line).
Starting points are m0 = 0.6 (above) and m0 = 0.2 (below).