Mirta Galesic 1,2,*, Henrik Olsson 1, Jonas Dalege 1, Tamara van der Does 1, & Daniel L. Stein 3,4

1 Santa Fe Institute, Santa Fe, New Mexico; 2 Complexity Science Hub Vienna, Austria; 3 Department of Physics and Courant Institute of Mathematical Sciences, New York University; 4 NYU-ECNU Institutes of Physics and Mathematical Sciences at NYU Shanghai, Shanghai, China.
* Corresponding author: Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501. E-mail: galesic@santafe.edu
BELIEF DYNAMICS FRAMEWORK 2
Abstract
Belief change and spread have been studied in many disciplines, from psychology, sociology, economics, and philosophy to computer science and statistical physics, but we still do not have a firm grasp on why some beliefs change and spread more easily than others. To fully capture the complex social-cognitive system that gives rise to belief dynamics, we need to integrate the findings of these disciplines into a single framework. At present, however, communication between disciplines is limited, and there is a lack of theoretical comparisons and empirical tests of the many different models of belief dynamics. Here we first review insights about
structural components and processes of belief dynamics studied in different disciplines,
focusing particularly on previously neglected but important areas such as cognitive
representations and strategies used to integrate information. We then outline a unifying
framework that enables theoretical and empirical comparisons of different belief dynamics
models. The framework is quantified using a statistical physics formalism, grounded in
cognitive and social theory as well as empirical observations. We show how the framework
can be used to integrate numerous previous models and develop a more comprehensive
science of belief dynamics.
Keywords: belief dynamics models, model comparison, statistical physics, theoretical framework
Why do individuals change some beliefs quickly in light of new information while
they fiercely resist changing other beliefs? Why do some beliefs spread faster than others in
societies? Why do societies sometimes come to consensus about an issue and at other times
splinter in clusters with widely different beliefs? Questions about belief dynamics have
long occupied scholars across a wide range of disciplines and have been studied on
individual and societal levels. At the individual level, researchers in psychology and
cognitive science, evolutionary anthropology, political science, and philosophy have been
studying cognitive processes that underlie belief dynamics, including individual and social
learning, social judgment and decision making, social influence, and belief formation and
change. At the societal level, researchers in disciplines ranging from sociology and
economics to statistical physics and computer science have been focusing on the spread of
beliefs in social networks, and the two-way relationship between the structure of these
networks and people’s beliefs.
While these efforts have considerably improved our understanding of belief dynamics, a comprehensive grasp of the processes and outcomes of belief dynamics, and of how they can be influenced, remains out of reach. This goal has been elusive for at least three reasons. First, while
most researchers agree that beliefs form and change through a dynamic interplay of social
network structure and cognitive processes, most disciplines focus predominantly on only
one of these aspects. This makes it difficult to adequately capture the underlying social-
cognitive complex system, leading to crucial gaps in our understanding of belief dynamics.
For example, while models of belief dynamics can predict the emergence of full consensus or strong polarization 1, they generally cannot predict for which beliefs a society will reach consensus and for which beliefs polarization will increase. Similarly, these models often
cannot predict why some beliefs are more or less easy to change. Here we propose that the
key to understanding these puzzles might be to integrate social and cognitive components
of the complex system underlying belief dynamics into a single unifying framework, as
illustrated in Figure 1.
The second reason for a lack of understanding of belief dynamics is that there is
little theoretical and empirical comparison of different models, both within and especially
between disciplines (for a rare exception, see 2). Many models are developed on an abstract
level making empirical measurement difficult, or they remain untested because they are
developed within fields that do not traditionally conduct empirical studies. Coupled with scarce communication between different fields, this leads to a slow build-up of fundamental principles of belief dynamics. We therefore review insights from
different disciplines organized according to the structural and process components of belief
dynamics identified in Figure 1. We go beyond several excellent reviews that focus mostly
on developments within a single discipline 1,3–6 and follow examples of reviews that cross
interdisciplinary boundaries 7–10. In particular, we review research efforts in areas that are
essential to understanding the complex system underlying belief dynamics but are typically
not considered in this literature, such as the work on cognitive representations of beliefs
and social networks, and on the many simple strategies people have been shown to use to
integrate information about their own and others’ beliefs.
Third, there is a lack of quantitative modeling frameworks that could be used to
adequately integrate both cognitive and social network aspects and make empirically
testable predictions about belief dynamics. Typically, models rich in cognitive and social
detail tend to be formulated verbally, while quantitative models tend to lack sufficient
realism. Agent-based models are well positioned to fill this gap 11–14. Here we propose a
quantitative framework that is rich in cognitive and social detail yet also simple enough to
provide quantitative predictions for integrating these details. The framework integrates the
main structural and process components of belief dynamics and enables systematic
theoretical and empirical comparisons of different implementations (Box 1 and Figure 2).
1. Structural Components
Structurally, the complex system underlying belief dynamics consists of two main
components: individual beliefs and their connections, and people’s social contacts and the
relationships between them.
1.1. Individual Beliefs
Individual beliefs are an essential component of belief dynamics, but there is little
consensus on how they should be represented and measured. Here we adopt an inclusive
definition of beliefs that encompasses conceptualizations used in different fields, including
beliefs as assumptions about states of the world (e.g., “vaccination is safe for children”, “I
believe there is a 30% chance of rain tomorrow”), views on moral and political issues (e.g.,
“genetically modified foods are against nature”, “a free market is good for everyone”),
evaluations or cognitive aspects of attitudes (e.g., “cats are fun”, “this politician is
trustworthy”; 15,16), or one's own preferences (e.g., “I prefer having more time to earning more
money”, “I like summers more than winters”). Support for a belief can be measured on
different scales, from dichotomous true-false or agree-disagree, to rating scales with several
points, to subjective probabilities and probability distributions. Different domains
emphasize different ways of measurement: rating scales are often used in social psychology
and political science 17,18, while subjective probabilities are relatively more often used in
economics 19–21, philosophy 22,23, artificial intelligence 24, and judgment and decision
making 25–28. Some theories in the attitude literature combine both evaluations of beliefs
and their subjective probabilities 29–32.
Extant models of belief dynamics typically consider changes of a single belief over
time, but beliefs are associated with other beliefs that develop and change together, thus
forming a belief network. Although the idea of belief networks has a long history in
psychology 33,34, formal theories that can provide predictions of how the network structures
of individual beliefs affect belief dynamics and related behaviors have only been developed
relatively recently 35–37. For example, Dalege and his colleagues 38,39 have developed a
quantitative model of belief networks that underlie people’s attitudes towards different
issues. They use a statistical physics framework to formalize the dynamic interactions
between different beliefs related to a particular attitude object. They show that the structure
of belief networks can have important consequences for the likelihood of belief change
(e.g., strongly connected belief networks are more stable than weakly connected belief
networks; 40). Other models combine the structure of the internal belief network and the
social network. For example, the authors of 41 developed a model of belief dynamics where each
individual is endowed with a network of interacting beliefs that changes through interaction
with other individuals in a social network.
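The core intuition behind such network models of attitudes can be illustrated with a minimal Ising-style sketch (our own toy example with illustrative couplings, not Dalege and colleagues' exact formalization): beliefs are binary nodes, couplings encode their pairwise consistency, and configurations with lower energy correspond to more internally consistent, and hence more stable, sets of beliefs.

```python
import numpy as np

def belief_network_energy(beliefs, couplings):
    """Ising-style energy of a belief network: lower energy means
    a more internally consistent (more stable) set of beliefs.
    beliefs:   vector of +1/-1 belief endorsements
    couplings: symmetric matrix J[i, j] linking beliefs i and j
    """
    return -0.5 * beliefs @ couplings @ beliefs

# Two positively coupled beliefs plus one opposed belief (illustrative numbers)
J = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, -0.5],
              [-0.5, -0.5, 0.0]])

consistent = np.array([1, 1, -1])    # aligned with the couplings
conflicted = np.array([1, 1, 1])     # third belief clashes with the others

print(belief_network_energy(consistent, J))  # lower (more stable)
print(belief_network_energy(conflicted, J))  # higher (more dissonant)
```

On this toy account, strongly connected networks sit in deeper energy minima, which is one way to see why they resist change.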
1.2. Social Networks
Information from people’s social environment is an important contributor to the
complexity of belief formation and change. The relevant social environments can include
many different social contacts, from friends and family, online communities, media figures,
to companies, educators, health workers, and politicians. These contacts are embedded in ever-evolving social networks that to a large extent determine what information is spread and to whom.
The importance of social networks for understanding how beliefs change has long
been acknowledged, with Durkheim highlighting as early as 1893 42 the need to study individuals within their social networks. Moreno introduced the first network visualizations and methods for measuring them in 1951 43. Since then, a large number of network
properties important for belief dynamics have been identified (for an overview of early
work, see 44; for recent work, see 5,45–48). Some of the most frequently studied properties are
various measures of centrality (i.e., structural importance) of individuals, strength and
length of paths between them (e.g., how long does it take on average for information to
flow from one individual to another individual in the social network), balance of
relationships in which they are embedded (e.g., are two friends of an individual generally
also friends?), and homophily (i.e., the tendency to have friends who hold similar beliefs).
For example, homophily can constrain the spread of beliefs and behaviors such as smoking,
obesity, and even divorce 49–51, and the resulting beliefs can influence processes such as
network updating that in turn influence homophily 52. A related concept is that of echo chambers, in which beliefs are amplified or reinforced through communication with similar others and repetition of what one already believes 53,54.
2. Process Components
The same structure of a complex social system can give rise to different belief
dynamics: Differences in cognitive and social processes governing people’s representation
of beliefs and social networks, their cognitive and social dissonance, and the way they
update their beliefs and network links can all contribute to different belief dynamics. In the
following sections, we will discuss each of them in turn, even though in reality these
processes often occur in parallel and influence each other.
2.1. Representing Beliefs and Social Networks
Most models of belief dynamics assume that people have stable and accurate
representations of their own beliefs and beliefs of their social contacts. However, research
in cognitive psychology suggests that people often do not have a readily formed belief
about an issue, but must (re)construct it on the spot based on different considerations that
come to mind 55,56. These personal considerations can include other related beliefs, values,
assumptions, and preferences one has related to the issue at hand. Specific considerations
come to mind depending on processes of memory, such as priming and forgetting,
processes of self-deception 57,58, and the inter-relationships between one’s beliefs.
Similarly, people might not have a ready-made representation of what others in their social
networks believe. These representations have to be formed by integrating available social
samples and are affected by memory processes 59,60 as well as by one’s social contacts’
impression management 61 and other signaling strategies 62–65.
The many relevant personal considerations and perceived beliefs of social contacts
need to be integrated into an overall representation of one’s own and others’ beliefs about
an issue. Many disciplines study integration of either personal considerations or
information from social networks, but rarely both, although they might involve similar
processes.
Some approaches to studying integration processes are primarily normative,
suggesting how integration should be done to achieve a coherent set of beliefs. A standard
normative approach is Bayesian learning, which assumes that individuals update their
beliefs optimally given an underlying model of the world that includes their own and
others’ beliefs, in line with Bayesian calculus. Bayesian approaches to information
integration have been intensively explored in artificial intelligence 24, cognitive science 66,67, and economics 20,68. In addition, several logical approaches to information integration
have been developed in the context of belief dynamics 69–71. In dynamic epistemic logic 70,71, the updating of knowledge and beliefs takes place through plausibility models, in which individuals consider different worlds as possible candidates for the actual world, and each world has a plausibility order in relation to the other worlds. In its simplest form, belief revision occurs by removing non-plausible worlds from the total set of worlds.
Some approaches combine logical integration and specific social network structures 72,73.
These theories include information about what others know and believe and analyze what
brings about knowledge and belief change.
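The simplest form of revision described above can be sketched as follows (a toy illustration with an assumed finite set of worlds and numeric plausibility ranks, not a full dynamic epistemic logic):

```python
# Toy plausibility-model belief revision: worlds are candidate states of the
# actual world, ordered by plausibility (lower rank = more plausible).
# Revision removes worlds ruled out by new information; the agent believes
# whatever holds in the most plausible remaining worlds.

worlds = {
    "rain-and-wind": 2,
    "rain-no-wind": 1,
    "dry": 0,          # initially most plausible
}

def revise(worlds, ruled_out):
    """Drop worlds contradicted by new information."""
    return {w: rank for w, rank in worlds.items() if w not in ruled_out}

def believed(worlds):
    """The most plausible remaining worlds (what the agent believes)."""
    best = min(worlds.values())
    return {w for w, rank in worlds.items() if rank == best}

print(believed(worlds))                      # {'dry'}
after = revise(worlds, ruled_out={"dry"})    # observe that it is not dry
print(believed(after))                       # {'rain-no-wind'}
```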
Other approaches to studying integration processes are primarily descriptive, aiming to characterize the cognitive strategies people actually use. These approaches are motivated by observations that
people tend to use relatively simple strategies that can work with limited information and
cognitive capacity, sometimes approximating the results of normative ideals 5,21,74–77. One
prominent class of strategies are frequency-dependent strategies. Individuals following
these rules update their beliefs to those supported by a certain number or fraction of their
social contacts, or of their personal considerations about an issue. The most studied rule in
this class is the majority rule 78–82 and its plurality variants (for more than two options) that
can be used both for integrating one’s own considerations (e.g., tallying, 83,84) and external
social information 85. Across many disciplines, rules using other thresholds have been studied empirically and theoretically, including minority rule, unanimity rule, and others (in political science, 78,79; economics, 86; sociology, 81,87,88; law, 89; organizational science, 90; evolutionary anthropology, 91; animal learning, 92; psychology, 82,93–96; machine learning, e.g., 97; and statistical physics, 80,98).
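A minimal sketch of such a frequency-dependent rule (our own illustration; the threshold and tie-handling choices are assumptions, with a threshold of 0.5 giving the majority rule):

```python
from collections import Counter

def plurality_update(own_belief, neighbor_beliefs, threshold=0.5):
    """Frequency-dependent updating: adopt the most common belief among
    one's social contacts (or personal considerations) if its share
    exceeds the threshold; otherwise keep the current belief."""
    if not neighbor_beliefs:
        return own_belief
    (top, count), = Counter(neighbor_beliefs).most_common(1)
    if count / len(neighbor_beliefs) > threshold:
        return top
    return own_belief

print(plurality_update("agree", ["disagree", "disagree", "agree"]))   # 'disagree'
print(plurality_update("agree", ["disagree", "agree"]))               # 'agree' (no majority)
```

Minority or unanimity rules correspond to other threshold choices in the same scheme.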
Another prominent class of descriptive approaches to information integration is
averaging strategies (and the closely related weighted additive rules). In psychology, they
have been studied as rules for integrating personal considerations about an issue using
different weighting schemes 84,99, as well as for integrating social information in belief
updating 100, advice taking 101, and social judgment 100,102. In other disciplines they have
been studied mostly for integrating social information. For example, DeGroot updating 21
and its variants 103 studied in sociology and economics involve weighted averaging of one’s
own and neighbors’ beliefs 51,104–107. Several rules studied in statistical physics, such as the voter rule and the Ising and Potts models, are essentially averaging rules as well 108–110.
Random updating or “unbiased” copying that is often used as a benchmark in evolutionary
anthropology is also an averaging rule in the long run 111. Epidemiological models of belief dynamics, in which individuals are described by a certain probability of becoming “infected with” and “recovering from” a belief, can also be thought of as examples of averaging 112.
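The basic DeGroot scheme can be sketched in a few lines (the weight matrix here is illustrative; the cited variants differ mainly in how the weights are chosen and updated):

```python
import numpy as np

def degroot_step(beliefs, weights):
    """One round of DeGroot updating: each individual's new belief is a
    weighted average of their own and their neighbors' beliefs.
    weights: row-stochastic matrix (each row sums to 1); weights[i, i] is
    the weight an individual places on their own current belief."""
    return weights @ beliefs

# Three individuals; each keeps half of their own belief and splits the
# rest between the other two (illustrative weights).
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
b = np.array([0.0, 0.5, 1.0])   # e.g., support for a policy on a 0-1 scale

for _ in range(20):
    b = degroot_step(b, W)
print(np.round(b, 3))  # beliefs converge toward the weighted consensus, here 0.5
```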
A final prominent group of descriptive approaches to information integration comprises various non-compensatory strategies. Unlike frequency-dependent and averaging rules,
they do not take into account all personal considerations or social contacts, but choose just
one or a smaller subset based on particular properties of these considerations or contacts.
For example, when people follow a take-the-best rule, they rely on a single best
consideration, disregarding all others 113. Similarly, they might disregard other
considerations when following memory-based cues such as recognition 113, perceptual or

spread. Its main advantage is that it can express a large number of different models and
reproduce a variety of empirically observed patterns of belief spread, using only a few
components. Each of its variables and parameters can be either directly observed or
estimated from empirical data, and can be given a plausible psychological meaning
reflecting factors that have been recognized as important for belief dynamics. In what
follows, we outline how our framework incorporates the main aspects of belief dynamics
reviewed in the previous sections.
3.1. Structural Components
Individual Beliefs. We adopt a widely used statistical physics approach 1, where the
focal belief under consideration by individual i is represented by a discrete-valued variable
σ𝑖 that is traditionally (for historical reasons) called a spin. Depending on the issue in
question, the variable σ𝑖 can take anywhere from two (yes/no, agree/disagree) to many
(e.g., ranging from strongly agree to strongly disagree) values corresponding to different
belief states. Belief states can be measured by attitude questions or approximated from
behaviors (e.g., voting or purchases). The spin σ𝑖 is usually treated as a scalar variable
which can change its state among a discrete or continuous set of possibilities over time, but
at any given time is found in one state only. We introduce a more general vector approach
(see Box 1), which includes the usual scalar spin models as a special case. Vector models
of cultural dynamics have been studied by others 125, but the nature and implementation of
those are different from that discussed here. Our conceptualization of beliefs is also similar
to quantum and Markov models of belief change during evidence accumulation described
by Busemeyer and colleagues 146. These models, however, do not include the social network component and are typically applied to relatively short time scales.
Consider an example of our general vector approach in which an individual fills out
a survey and is asked to choose between five possible belief states regarding the issue in
question: strongly agree, agree, neutral/don't know, disagree, strongly disagree. We define a vector σ⃗_i with (in this example) five components that describe the probabilities or weights assigned to each of the belief states, so that the sum of all components of σ⃗_i at any time is equal to one. For example, we might answer “agree” to a survey question but feel that our true belief state is somewhere between “agree” and “strongly agree”, in which case the vector (0.25, 0.75, 0, 0, 0) is a more accurate representation. This framework
includes standard scalar-spin statistical physics models as a special case, in which σ𝑖 can
still be in only one of five possible states, now written as (1,0,0,0,0), (0,1,0,0,0),
(0,0,1,0,0), (0,0,0,1,0), and (0,0,0,0,1). Depending on how fine-grained the different
choices are, the framework can accommodate both discrete (or categorical) and continuous
representations of beliefs. For example, one’s beliefs about the safety of genetically
modified food can be described as a probability of choosing each of the points on a scale
from very unsafe to very safe; and one’s beliefs about different political options might be
described as 20% support for party A and 80% for party B.
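In code, such a belief vector might be represented as follows (a sketch; the mapping of scale points to numeric positions is our illustrative assumption, not part of the framework):

```python
import numpy as np

# Belief state as a probability vector over m = 5 answer categories:
# (strongly agree, agree, neutral, disagree, strongly disagree).
sigma_i = np.array([0.25, 0.75, 0.0, 0.0, 0.0])   # between "agree" and "strongly agree"
assert np.isclose(sigma_i.sum(), 1.0)             # components always sum to one

# A classic scalar spin state is the special case of a one-hot vector:
agree_only = np.eye(5)[1]                          # (0, 1, 0, 0, 0)

def expected_position(sigma, scale=(2, 1, 0, -1, -2)):
    """Map a belief vector to an expected position on the rating scale
    (the numeric scale values are an illustrative choice)."""
    return float(np.dot(sigma, scale))

print(expected_position(sigma_i))    # 1.25, i.e. between agree (1) and strongly agree (2)
```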
Social Networks. Individuals are represented by nodes on a social network.
Weighted links connect two nodes i and j (neighbors) and represent the possibility that
information can flow between the two. The strength and sign of the interaction between
two nodes is denoted by a positive or a negative link weight. A positive link weight
between two neighbors indicates a tendency for individuals 𝑖 and 𝑗 to want to agree, while
a negative coupling indicates that a change in 𝑖’s belief induces a change in the opposite
direction in 𝑗. Moreover, the graph can be directed, meaning that the communication
between two individuals is only one-way or else two-way but with different link weights in
each direction. These social networks can be measured objectively using sociometric
methods or subjectively using people's perceived associations between their social contacts.
3.2. Process Components
Representing Beliefs and Social Networks. Starting from some initial belief states,
beliefs evolve in time according to specific dynamical rules. These rules depend on
interactions with both the internal cognitive beliefs that individuals hold and with the
perceived beliefs of others around them. As mentioned, the focal belief under consideration
is represented by the vector σ⃗_i. The framework also includes other internal beliefs related to that focal belief, which are assumed to form one's cognitive field h⃗_i^cog, as well as perceived preferences of one's social contacts. The strategies used to integrate these inputs can be probed directly through survey and experimental measurement, or can be inferred by comparing predictions of models implemented in the framework.
Cognitive and Social Dissonance. When a belief does not fit well with one’s
cognitive and/or social field, one can experience cognitive and/or social dissonance. In our framework, these dissonances, d_i^cog and d_i^soc, correspond to the squared difference between the probability (or support) distributions that are currently assigned to different beliefs. This
difference is then used to calculate a new belief distribution that would best minimize the
dissonance, using a Boltzmann entropy framework (equivalent to the softmax function
often used in decision theory, Box 1).
While attempting to minimize dissonance, individuals can experience a tension
between minimizing their cognitive and social dissonance 34,126. For different individuals
and issues, one or the other source of dissonance can be more important. In our framework
this is represented by the parameter w, which weights the social relative to the cognitive
dissonance (Box 1).
The effect of dissonance on the evolution of beliefs is greatest for those who have
the most certainty about their own beliefs and those of their neighbors. In our framework,
this certainty is modeled by the (inverse of the) parameter β. This parameter captures
random error in updating processes that can occur in at least three different stages of belief
dynamics. People can be distracted and make mistakes in perceiving their own and others’
beliefs. They can also over- or under-estimate the dissonance between their different
beliefs. And they can waver in their beliefs and not adhere completely to achieving the
lowest dissonance state possible at every time step. All of these errors result in a certain
amount of noise and unpredictability. This is represented as temperature in statistical
physics models, with zero temperature corresponding to the complete absence of noise. In
our model, temperature corresponds to 1/β (Box 1). The higher the temperature (i.e., the
smaller β is), the greater the level of uncertainty and therefore the less of an effect
dissonance has on belief updating.
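A hedged sketch of how such dissonance-minimizing, noisy updating might be implemented (the exact functional forms are given in Box 1; the candidate states, field values, and distance function here are illustrative choices of ours):

```python
import numpy as np

def dissonance(sigma, field):
    """Squared distance between a candidate belief vector and a field
    (cognitive or social), both probability vectors over belief states."""
    return float(np.sum((sigma - field) ** 2))

def update_distribution(candidates, h_cog, h_soc, w=1.0, beta=5.0):
    """Boltzmann / softmax over candidate belief states: states with lower
    total dissonance H = d_cog + w * d_soc get higher probability; small
    beta (high temperature) means noisier, more uniform updating."""
    H = np.array([dissonance(c, h_cog) + w * dissonance(c, h_soc)
                  for c in candidates])
    p = np.exp(-beta * H)
    return p / p.sum()

# Candidate states: the one-hot ("pure") belief states on a 3-point scale.
candidates = [np.eye(3)[k] for k in range(3)]
h_cog = np.array([0.7, 0.3, 0.0])   # internal considerations lean to state 0
h_soc = np.array([0.2, 0.6, 0.2])   # social contacts lean to state 1

p_sharp = update_distribution(candidates, h_cog, h_soc, w=1.0, beta=5.0)
p_noisy = update_distribution(candidates, h_cog, h_soc, w=1.0, beta=0.1)
print(p_sharp)
print(p_noisy)   # closer to uniform: noise weakens the pull of dissonance
```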
Both parameters w and β can be estimated from data or approximated by
empirical measurements of subjective relative importance of staying aligned with one’s
own or with one’s social network’s beliefs (for w), and of attention and subjective certainty
(for β).
Belief and Network Updating. Dissonance can be reduced by modifying one's own
beliefs, but it can also be alleviated by changing or cutting links to beliefs and individuals
that disagree with one's firmly held belief. Our framework allows for modeling different
processes of belief and network updating, which can include random rewiring of edges,
rewiring based on some threshold of similarity between any two spins, rewiring aimed at
preserving balance in triads of spins or the overall network, and other processes.
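One such process, similarity-threshold rewiring, might be sketched as follows (our own illustration; the choices of which dissimilar neighbor to drop and which new contact to add are assumed to be random):

```python
import random

def rewire_dissimilar(neighbors, beliefs, i, similarity, threshold=0.5):
    """Similarity-threshold rewiring: individual i drops a link to one
    neighbor whose belief is too dissimilar and reconnects to a random
    non-neighbor (one of several rewiring processes the framework allows).
    neighbors: adjacency as a dict of sets; beliefs: dict of belief values."""
    too_far = [j for j in neighbors[i] if similarity(beliefs[i], beliefs[j]) < threshold]
    if not too_far:
        return
    j = random.choice(too_far)
    candidates = [k for k in beliefs if k != i and k not in neighbors[i]]
    if not candidates:
        return
    k = random.choice(candidates)
    neighbors[i].remove(j)
    neighbors[j].remove(i)
    neighbors[i].add(k)
    neighbors[k].add(i)

neighbors = {0: {1}, 1: {0}, 2: set()}
beliefs = {0: 1.0, 1: 0.0, 2: 1.0}
similarity = lambda a, b: 1 - abs(a - b)

rewire_dissimilar(neighbors, beliefs, 0, similarity)
print(neighbors)  # 0 has cut the dissimilar neighbor 1 and linked to 2
```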
3.3. Implementing and Testing Models Within the Framework
Models formulated within the framework can reproduce commonly observed
outcomes of belief dynamics as well as empirically measured beliefs about diverse real-
world issues. Figure 2A shows predictions of simple models that reproduce several
common phenomena studied in belief dynamics models: consensus, clustering, and
polarization. What ultimately matters most is how well these models explain empirically
observed belief trends. Figure 2B and 2C show results from two empirical studies
investigating beliefs people have about genetically modified (GM) food and their political
preferences. Our framework enables comparison of models using different assumptions
about social networks, strategies for belief integration, relative importance of cognitive and
social dissonance, and noise in the updating process. As described in the caption of Figures
2B and 2C (also see Appendix), some models describe empirical trends better than others,
allowing insights into belief dynamics in different contexts. This kind of modeling could be
used to assess in advance the impacts of different educational interventions designed to
increase public science literacy, as well as to explore and predict societal trends and
understand their underlying mechanisms.
Box 1
Quantitative implementation of the integrative framework for models of belief dynamics
where the components of the “sampling vector” δ⃗_α are Kronecker δ-functions:

    (δ⃗_α)_α′ = δ_α,α′ = 1 if α′ = α, 0 if α′ ≠ α.    (3)

The social dissonance d_i^soc = d_i,α^soc is calculated similarly. In (3), α is held fixed and α′ runs from 1 to m. We employ here the usual vector notation in which the square of a vector is the dot (or inner) product of the vector with itself: v² = v · v.

To get a probability distribution over possible belief states for each node, we define the total dissonance function for individual i as

    H⃗_i = d_i^cog + w d_i^soc.    (5)
empirical trends (thick black line). Panel C shows how the framework can be used to
extrapolate empirical survey findings on political preferences and network connections 148
to broader populations. Here we simulated networks of voters in the 2018 elections in Ohio
assuming lower (top subpanels) or higher (bottom subpanels) homophily, and used
different assumptions about how voters integrate their internal preferences and information
from their social circles to predict likely belief dynamics (average or majority rule). The
predictions are compared to actual observed trends in political preferences in a probabilistic
election poll. Assuming a high homophily network leads to better correspondence of
predicted and empirical trends.
4. Conclusion
We reviewed a range of findings and models about belief dynamics from different
disciplines and proposed a quantitative framework that could be used to compare the many
different existing models theoretically and empirically. We hope that this review and
framework will enable integration of important findings so far largely developed
independently in different research fields, and contribute to a better understanding of
dynamics of beliefs about science, health, politics, and other issues arising in human
societies.
Appendix
To implement models within the framework and test them theoretically and
empirically, we start by assigning numerical values to each of the components of the belief
vector σ⃗_i. Our first implementation (Figure 2A) corresponds to theoretical tests of the
framework: we initially assign every individual one of 𝑚 belief states (1,0,0, … ,0,0),
(0,1,0, … ,0,0), ..., (0,0,0, … ,0,1), chosen randomly and independently of all neighbors and the individual's own cognitive field h_i^cog. In all implementations shown here, the belief
updating is done synchronously: at every time step, all agents simultaneously measure the distance to all possible future belief states (represented by δ⃗_α, where α is an index for the different belief states).
The three patterns in Figure 2A, showing consensus, clustering, and polarization,
were produced by varying only the parameters 𝛽 and 𝑤, and keeping the network structure
and the belief updating rule the same. The networks were generated by stochastic block
models with low homophily, where individuals were parts of social circles loosely
connected to each other and their initial beliefs were assigned randomly (Figure 2A,
leftmost subpanel). At each time step, individuals updated their beliefs using a simple
unweighted average of beliefs of their social contacts. When both the weight 𝑤 on the
social dissonance and the noise strength were relatively high (𝑤 = 2, β = 5, left panel),
populations always reached consensus. With the same high weight on social dissonance but
much lower levels of noise (𝑤 = 2, β = 30, middle panel), patterns akin to polarization
emerged in around 10% of the runs. Finally, when social and cognitive dissonances were
weighted equally (𝑤 = 1) and with the same level of low noise (β = 30, right panel),
individuals clustered into subgroups, with similar beliefs within each subgroup but
differing beliefs between subgroups. Further study of different combinations of the
parameters w and β on different social network structures and with different cognitive
strategies will enable comparisons between the predictions of different models.
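A compact sketch of this kind of simulation (not the authors' code: the block sizes, connection probabilities, and omission of the cognitive field are simplifying assumptions of ours, and agents here keep the full updated distribution rather than sampling a discrete state):

```python
import numpy as np

rng = np.random.default_rng(0)

def sbm_adjacency(block_sizes, p_in, p_out):
    """Stochastic block model: individuals belong to social circles that
    are densely connected inside and loosely connected to each other."""
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(float), labels

def simulate(adj, m=3, w=2.0, beta=5.0, steps=50):
    """Synchronous belief updating: each agent's social field is the
    unweighted average of neighbors' belief vectors; new belief
    distributions follow a Boltzmann/softmax over the m one-hot states."""
    n = adj.shape[0]
    beliefs = np.eye(m)[rng.integers(m, size=n)]      # random initial one-hot states
    states = np.eye(m)                                # candidate belief states
    for _ in range(steps):
        deg = adj.sum(1, keepdims=True)
        h_soc = adj @ beliefs / np.maximum(deg, 1)    # average of neighbors' beliefs
        # squared distance of every candidate state to each agent's social field
        d_soc = ((states[None, :, :] - h_soc[:, None, :]) ** 2).sum(-1)
        p = np.exp(-beta * w * d_soc)                 # cognitive field omitted for brevity
        beliefs = p / p.sum(1, keepdims=True)
    return beliefs

adj, labels = sbm_adjacency([20, 20, 20], p_in=0.3, p_out=0.02)
final = simulate(adj, w=2.0, beta=5.0)
print(np.round(final.mean(0), 2))   # population-level support for each belief state
```

Sweeping w and β in such a sketch is one way to explore when consensus, clustering, or polarization-like patterns emerge.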
Figure 2B and 2C show results from two empirical studies investigating belief
trends. Figure 2B shows the results of a study of beliefs about genetically modified (GM)
food, in which participants were asked to report their beliefs about GM food two times in
each of three study waves across several months 147. The initial conditions used here are the
beliefs reported by participants at the start of the survey on the safety of GM food.
Participants also reported twelve other beliefs about moral issues related to GM food 149, as
well as perceived beliefs of different social groups (family, online community, scientists,
doctors, etc.). The underlying social network consists of a series of ego networks where
participants are connected with directed links to each of those social groups (see the
leftmost panel of Figure 2B), weighted by the reported relevance of each of those groups
for participants’ own beliefs about GM food safety. In the second wave, participants received information about recent scientific studies showing no evidence that GM food is unsafe to eat. As a result, most participants’ beliefs became more positive for a while. We
compared these empirical trends with model predictions assuming three different rules for determining cognitive and social fields based on participants' moral beliefs and social perceptions: a simple averaging rule; importance-weighted averaging (where moral beliefs and social groups were weighted by their reported relevance for participants' beliefs about the safety of GM food); and a rule that took into account only the most important moral belief and social group. The parameters w and β were fitted at each time step by a grid
search restricted to half of the participants; these results were then used to predict the
beliefs of the other half. As the middle and right subpanels of Figure 2B show, weighted
averaging proved to be best in prediction of empirical trends. The models correctly
predicted that the greatest increase in positive opinions about GM food occurred
immediately after the intervention, which was followed by a decrease later on.
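The three integration rules can be written down directly. The following sketch is our own minimal formulation (the function names and the coding of perceived beliefs as numbers on a −1 to +1 axis are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def simple_average(values):
    """Simple averaging: every moral belief / social group counts equally."""
    return float(np.mean(values))

def weighted_average(values, weights):
    """Importance-weighted averaging: each belief/group is weighted by its
    reported relevance for the focal belief."""
    return float(np.average(values, weights=weights))

def take_the_most_important(values, weights):
    """Only the single most relevant belief/group determines the field."""
    return float(values[int(np.argmax(weights))])
```

For example, with perceived beliefs (1, −1, 1) and relevance weights (0.2, 0.5, 0.3), simple averaging yields 1/3, weighted averaging yields 0, and the most-important rule yields −1, so the three rules can make sharply different predictions for the same inputs.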
Finally, Figure 2C presents a study of political preferences as they developed in the
months before the 2018 US elections. We used a probabilistic survey 148 to determine initial
beliefs, internal preferences, and network connections of voters in different US states
(Figure 2C, leftmost panel), and then used our models to predict trajectories of their
support for Republicans, Democrats, other candidates, or for not voting. Voters were
assumed to update their preferences to minimize the dissonance between their own political
leanings (inferred from their votes in previous elections) and their current social
surroundings. We investigated the predicted trajectories of their beliefs using two different
rules, unweighted averaging and majority rule, to integrate information from their social
environments. We also compared two different assumptions about their social networks
(Figure 2C, leftmost panel), characterized by either low or high homophily. The results
show that the empirical trends inferred from the probabilistic survey corresponded best to
predictions assuming a network with high homophily. Preferences for voting Republican
and for not voting at all were best described by a model assuming unweighted averaging,
while preferences for voting Democratic were best described by a model assuming majority
rule.
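The two integration rules compared in the election study can likewise be sketched. This is our own simplified formulation, not the study's implementation: we collapse the four options to a single continuous −1/+1 preference axis, the adjustment rate for averaging is a hypothetical parameter, and ties under majority rule are broken by keeping the current preference.

```python
import numpy as np

def unweighted_average(own, neighbor_prefs, rate=0.5):
    """Unweighted averaging: move the current preference part of the way
    (by `rate`) toward the mean preference of the neighborhood."""
    return own + rate * (np.mean(neighbor_prefs) - own)

def majority_rule(own, neighbor_prefs):
    """Majority rule: adopt the side (+1 or -1) held by most neighbors;
    keep the current preference when the neighborhood is exactly tied."""
    total = np.sum(np.sign(neighbor_prefs))
    return own if total == 0 else float(np.sign(total))
```

The qualitative difference matters for the comparison above: averaging produces gradual, graded shifts toward the local mean, whereas majority rule produces abrupt, all-or-nothing switches.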
References
House, 1951).
44. Abelson, R. P. Mathematical models in social psychology. Adv. Exp. Soc. Psychol.
3, 1–54 (1967).
45. Albert, R. & Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod.
Phys. 74, 47–97 (2002).
46. Easley, D. & Kleinberg, J. Networks, crowds, and markets. (Cambridge University
Press, 2010).
47. Newman, M. E. J., Barabási, A.-L. & Watts, D. J. (eds). The structure and dynamics of
networks. (Princeton University Press, 2006).
48. Watts, D. J. Six degrees: The science of a connected age. (Norton, 2004).
49. Christakis, N. A. & Fowler, J. H. Connected: The amazing power of social networks
and how they shape our lives. (Harper Collins, 2010).
50. Centola, D. An experimental study of homophily in the adoption of health behavior.
Science 334, 1269–1272 (2011).
51. Golub, B. & Jackson, M. O. How homophily affects the speed of learning and best-
response dynamics. Q. J. Econ. 127, 1287–1338 (2012).
52. Shalizi, C. R. & Thomas, A. C. Homophily and contagion are generically
confounded in observational social network studies. Sociol. Methods Res. 40, 211–
239 (2011).
53. Jasny, L., Waggle, J. & Fisher, D. R. An empirical examination of echo chambers in
US climate policy networks. Nat. Clim. Chang. 5, 782–786 (2015).
54. Key, V. O. The responsible electorate. (Vintage, 1966).
55. Tourangeau, R., Rasinski, K. A. & D’Andrade, R. Attitude structure and belief
accessibility. J. Exp. Soc. Psychol. 27, 48–75 (1991).
56. Judd, C. M., Drake, R. A., Downing, J. W. & Krosnick, J. A. Some dynamic
properties of attitude structures: Context-induced response facilitation and
polarization. J. Pers. Soc. Psychol. 60, 193–202 (1991).
57. Paulhus, D. L. Self-deception and impression management in test responses. in
Personality assessment via questionnaires 143–165 (Springer, 1986).
58. von Hippel, W. & Trivers, R. The evolution and psychology of self-deception.
Behav. Brain Sci. 34, 1–16 (2011).
59. Galesic, M., Olsson, H. & Rieskamp, J. A sampling model of social judgment.
18026-2_15.
74. Petty, R. E. & Cacioppo, J. T. The elaboration likelihood model of persuasion. in
Advances in Experimental Social Psychology vol. 19 123–205 (1986).
75. Gigerenzer, G., Todd, P. M. & the ABC Research Group. Simple heuristics that
make us smart. (Oxford University Press, 1999).
76. Hertwig, R., Hoffrage, U. & the ABC Research Group. Simple heuristics in a social
world. (Oxford University Press, 2012).
77. Hoppitt, W. & Laland, K. N. Social learning: An introduction to mechanisms,
methods, and models. (Princeton University Press, 2013).
78. Condorcet, M. Essai sur l’application de l’analyse à la probabilité des décisions
rendues à la pluralité des voix [Essay on the application of analysis to the
probability of majority decisions]. (Imprimerie Royale, 1785).
79. Grofman, B., Owen, G. & Feld, S. L. Thirteen theorems in search of the truth.
Theory Decis. 15, 261–278 (1983).
80. Galam, S. Sociophysics: A review of Galam models. Int. J. Mod. Phys. C 19, 409–
440 (2008).
81. Rogers, E. M. Diffusion of innovations. (Simon & Schuster, 1995).
82. Hastie, R. & Kameda, T. The robust beauty of majority rules in group decisions.
Psychol. Rev. 112, 494–508 (2005).
83. Einhorn, H. J. & Hogarth, R. M. Unit weighting schemes for decision making.
Organ. Behav. Hum. Perform. 13, 171–192 (1975).
84. Dawes, R. M. The robust beauty of improper linear models in decision making. Am.
Psychol. 34, 571–582 (1979).
85. List, C. & Goodin, R. E. Epistemic democracy: Generalizing the Condorcet jury
theorem. J. Polit. Philos. 9, 277–306 (2001).
86. Schelling, T. Micromotives and macrobehavior. (Norton, 1978).
87. Centola, D. & Macy, M. Complex contagions and the weakness of long ties. Am. J.
Sociol. 113, 702–734 (2007).
88. Granovetter, M. Threshold models of collective behavior. Am. J. Sociol. 83, 1420–
1443 (1978).
89. Glaeser, E. L. & Sunstein, C. R. Extremism and social learning. J. Leg. Anal. 1,
263–324 (2009).
90. Berkes, F. Evolution of co-management: Role of knowledge generation, bridging
organizations and social learning. J. Environ. Manage. 90, 1692–1702 (2009).
91. Boyd, R. & Richerson, P. J. The origin and evolution of cultures. (Oxford University
Press, 2005).
92. Kendal, R. L. et al. Social learning strategies: Bridge-building between fields.
Trends Cogn. Sci. 22, 651–665 (2018).
93. Asch, S. E. Forming impressions of personality. J. Abnorm. Soc. Psychol. 41, 258–
290 (1946).
94. Claidière, N. & Whiten, A. Integrating the study of conformity and culture in
humans and nonhuman animals. Psychol. Bull. 138, 126–145 (2012).
95. Moscovici, S. & Zavalloni, M. The group as a polarizer of attitudes. J. Pers. Soc.
Psychol. 12, 125–135 (1969).
96. Sorkin, R. D., West, R. & Robinson, D. E. Group performance depends on the
majority rule. Psychol. Sci. 9, 456–463 (1998).
97. Breiman, L. Bagging predictors. Mach. Learn. 24, 123–140 (1996).
98. Krapivsky, P. L. & Redner, S. Dynamics of majority rule in two-state interacting
spin systems. Phys. Rev. Lett. 90, 238701 (2003).
99. Payne, J. W., Bettman, J. R. & Johnson, E. J. The adaptive decision maker.
(Cambridge University Press, 1993).
100. Anderson, N. H. Integration theory and attitude change. Psychol. Rev. 78, 171–206
(1971).
101. Soll, J. B. & Larrick, R. P. Strategies for revising judgment: How (and how well)
people use others’ opinions. J. Exp. Psychol. Learn. Mem. Cogn. 35, 780–805
(2009).
102. Brehmer, B. Social judgment theory and the analysis of interpersonal conflict.
Psychol. Bull. 83, 985–1003 (1976).
103. Friedkin, N. E. & Johnsen, E. C. Social influence and opinions. J. Math. Sociol. 15,
193–206 (1990).
104. Golub, B. & Jackson, M. O. Naïve learning in social networks and the wisdom of
crowds. Am. Econ. J. Microeconomics 2, 112–149 (2010).
105. Grimm, V. & Mengel, F. Experiments on belief formation in networks. J. Eur. Econ.
Assoc. 18, 49–82 (2020).
106. Acemoglu, D., Ozdaglar, A. & ParandehGheibi, A. Spread of (mis)information in
social networks. Games Econ. Behav. 70, 194–227 (2010).
107. Jadbabaie, A., Molavi, P., Sandroni, A. & Tahbaz-Salehi, A. Non-Bayesian social
learning. Games Econ. Behav. 76, 210–225 (2012).
108. Clifford, P. & Sudbury, A. A model for spatial conflict. Biometrika 60, 581–588
(1973).
109. Holley, R. A. & Liggett, T. M. Ergodic theorems for weakly interacting infinite
systems and the voter model. Ann. Probab. 3, 643–663 (1975).
110. Redner, S. A guide to first-passage processes. (Cambridge University Press, 2001).
111. Boyd, R. & Richerson, P. J. Culture and the evolutionary process. (University of
Chicago Press, 1985).
112. Newman, M. E. J. The structure and function of complex networks. SIAM Rev. 45,
167–256 (2003).
113. Gigerenzer, G. & Goldstein, D. G. Reasoning the fast and frugal way: Models of
bounded rationality. Psychol. Rev. 103, 650–669 (1996).
114. Reber, R. & Schwarz, N. Effects of perceptual fluency on judgments of truth.
Conscious. Cogn. 8, 338–342 (1999).
115. Schooler, L. J. & Hertwig, R. How forgetting aids heuristic inference. Psychol. Rev.
112, 610–628 (2005).
116. Oppenheimer, D. M. The secret life of fluency. Trends Cogn. Sci. 12, 237–241
(2008).
117. Schwarz, N., Bless, H., Strack, F., Klumpp, G. et al. Ease of retrieval as
information: Another look at the availability heuristic. J. Pers. Soc. Psychol. 61,
195–202 (1991).
118. Weaver, K., Garcia, S. M., Schwarz, N. & Miller, D. T. Inferring the popularity of
an opinion from its familiarity: A repetitive voice can sound like a chorus. J. Pers.
Soc. Psychol. 92, 821–833 (2007).
119. Bandura, A. Psychotherapy as a learning process. Psychol. Bull. 58, 143–159 (1961).
120. Henrich, J. & Gil-White, F. J. The evolution of prestige: Freely conferred deference
135. Schweighofer, S., Schweitzer, F. & Garcia, D. A weighted balance model of opinion
hyperpolarization. JASSS 23, 1 (2020).
136. Cartwright, D. & Harary, F. Structural balance: a generalization of Heider’s theory.
Psychol. Rev. 63, 277–293 (1956).
137. Duggan, M. & Smith, A. The political environment on social media. (2016).
138. McPherson, M., Smith-Lovin, L. & Cook, J. M. Birds of a feather: Homophily in
social networks. Annu. Rev. Sociol. 27, 415–444 (2001).
139. Schelling, T. C. Models of segregation. Am. Econ. Rev. 59, 488–493 (1969).
140. Schelling, T. C. Dynamic models of segregation. J. Math. Sociol. 1, 143–186 (1971).
141. Skyrms, B. & Pemantle, R. A dynamic model of social network formation. Proc.
Natl. Acad. Sci. 97, 9340–9346 (2000).
142. Holme, P. & Newman, M. E. J. Nonequilibrium phase transition in the coevolution
of networks and opinions. Phys. Rev. E 74, 056108 (2006).
143. Almaatouq, A. et al. Adaptive social networks promote the wisdom of crowds. Proc.
Natl. Acad. Sci. 117, 11379–11386 (2020).
144. Belaza, A. M. et al. Statistical physics of balance theory. PLoS One 12, 1–19 (2017).
145. Galesic, M. & Stein, D. L. Statistical physics models of belief dynamics: Theory and
empirical tests. Phys. A Stat. Mech. Appl. 519, 275–294 (2019).
146. Busemeyer, J. R., Kvam, P. D. & Pleskac, T. J. Markov versus quantum dynamic
models of belief change during evidence monitoring. Sci. Rep. 9, 18025 (2019).
147. van der Does, T., Galesic, M., Stein, D. L. & Fedorov, N. Moral and social
foundations of beliefs about scientific issues: The case of GM food and vaccination.
(2020).
148. Olsson, H., Bruine de Bruin, W., Galesic, M. & Prelec, D. Bayesian weighting using
wisdom of crowds improves election forecasts. (2020).
149. Graham, J. et al. Mapping the moral domain. J. Pers. Soc. Psychol. 101, 366–385
(2011).
Acknowledgements: MG, HO, and JD were supported in part by a grant from the
National Science Foundation (BCS-1918490); MG and TvdD were supported in part by a
grant from the National Institute of Food and Agriculture (NIFA-2018-67023-27677). The
funders had no role in study design, data collection and analysis, decision to publish, or
preparation of the manuscript. Any opinions, findings and conclusions or recommendations
expressed in this material are those of the authors and do not necessarily reflect the views
of the funding agencies.
Competing interests: The authors declare no competing interests.