
The Irrationality of Disagreement

Robin Hanson
Associate Professor of Economics
George Mason University
We Disagree, Knowingly

• Stylized Facts:
– Argue in science/politics, bets on stocks/sports
– Especially regarding ability, when hard to check
• Less on “There’s another tree”
– We dismiss those dumber, but do not defer to those smarter
– Disagreeing does not embarrass us, though its absence can
– Given any pair free to talk, we can find many topics they disagree on
• Even people who think rational agents should not disagree
• Precise version: we can publicly predict the direction of the
other’s next opinion, relative to what we say
We Can’t Agree to Disagree
Aumann: Nobel Prize 2005; this is his most cited paper, by a factor of 2
Aumann 1976 assumed:
• Any information
• Of possible worlds
• Common knowledge
• Of exact E1[x], E2[x]
• Would say next
• For Bayesians
• With common priors
• If seek truth, not lie, josh,
or misunderstand
[Figure: Venn diagram of Agent 1’s and Agent 2’s information sets with their common knowledge set. John and Mary each estimate a car’s age, E1[x] and E2[x], from private clues (“It wasn’t shiny”, “It sounded old”, “Fred said so”, “I can still picture it”, “I had a good viewing angle”, “I’ve never been wrong before”, “Mary is blind”); some regions give an estimate of age 7, others age 2. Caption: Agree If Averages Same, E1[x] = E2[x].]

Aumann (1976) Annals of Statistics 4(6):1236–1239.
We Can’t Agree to Disagree
Aumann in 1976:                    Since generalized to:
• Any information
• Of possible worlds            →  Impossible worlds
• Common knowledge              →  Common Belief
• Of exact E1[x], E2[x]         →  A f(•, •), or who max
• Would say next                →  Last ±(E1[x] - E1[E2[x]])
• For Bayesians                 →  At core, or Wannabe
• With common priors            →  Symmetric prior origins
• If seek truth, not lie or misunderstand
Disagreement Is Unpredictable

$I(\omega)$ knows $X_\omega = v$ if $I(\omega) \subseteq \{\omega : X_\omega = v\}$
The meet $I \wedge J(\omega)$ is common knowledge (c.k.)
$V_\omega = E_\mu[X_\omega \mid I(\omega)]$,  $Y_\omega = E_\mu[X_\omega \mid J(\omega)]$
$Z_\omega = E_\mu[Y_\omega \mid I(\omega)]$,  $P_\omega = 1[Z_\omega - V_\omega > 0]$
Note: $E_\mu[Z_\omega - V_\omega \mid I \wedge J(\omega)] = 0$
If c.k. that $J$ knows $P$, then c.k. that $Z_\omega = V_\omega$!
Hanson (2002) Econ. Lett. 77:365–369.
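
As a quick check on the “Note” line, here is a minimal Python sketch (a toy example constructed for this writeup, not part of the talk): a common prior over six states, two information partitions, and a direct verification that $E_\mu[Z_\omega - V_\omega]$ is zero on every cell of the meet $I \wedge J$.

```python
import itertools

# Toy check (assumed example, not from the talk) of the "Note" line above:
# E_mu[Z_w - V_w | (I ^ J)(w)] = 0 on every cell of the meet of I and J.
states = [0, 1, 2, 3, 4, 5]
mu = {w: 1 / 6 for w in states}                        # common prior
X = {0: 1.0, 1: 3.0, 2: 2.0, 3: 5.0, 4: 4.0, 5: 0.0}   # random variable
I = [{0, 1}, {2, 3}, {4}, {5}]                         # agent 1's partition
J = [{0, 2}, {1, 3}, {4, 5}]                           # agent 2's partition

def cell(partition, w):
    return next(c for c in partition if w in c)

def cond_exp(f, event):
    p = sum(mu[w] for w in event)
    return sum(mu[w] * f[w] for w in event) / p

V = {w: cond_exp(X, cell(I, w)) for w in states}       # V_w = E_mu[X | I(w)]
Y = {w: cond_exp(X, cell(J, w)) for w in states}       # Y_w = E_mu[X | J(w)]
Z = {w: cond_exp(Y, cell(I, w)) for w in states}       # Z_w = E_mu[Y | I(w)]

def meet(P, Q):
    """Finest common coarsening of two partitions, built by merging overlaps."""
    cells = [set(c) for c in P + Q]
    merged = True
    while merged:
        merged = False
        for a, b in itertools.combinations(cells, 2):
            if a & b:
                cells.remove(a); cells.remove(b); cells.append(a | b)
                merged = True
                break
    return cells

diff = {w: Z[w] - V[w] for w in states}
for m in meet(I, J):
    print(sorted(m), round(cond_exp(diff, m), 12))     # each prints ~0.0
```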
Experiment Shows Disagree
E.g.: What % of U.S. say dogs better pets than cats?
Timeline (example values in parentheses):
• A gets a clue on X; B gets a clue on X
• A1 = A’s guess of X (30%)
• B is told A1; B1 = B’s guess of X (70%)
• B2 = B’s guess of A2 (40%)
• A is told Sign(B2 − B1) (“low”)
• A2 = A’s guess of X (40%)
• Losses: A pays (A1 − X)² + (A2 − X)²; B pays (B1 − X)² + (B2 − A2)²
Finding: A neglects the clue from B; B reliably predicts this neglect.
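
To make the loss formulas concrete, here is a tiny Python sketch of how one round would be scored; the guesses are the example values above, while the true X is a hypothetical number chosen only for illustration.

```python
# Hypothetical scoring of the example round (X_true is an assumed value;
# the slide does not reveal it).  Losses follow the slide's formulas.
A1, B1, B2, A2 = 30, 70, 40, 40       # example guesses from the slide
X_true = 55                           # assumed true value, illustration only

loss_A = (A1 - X_true) ** 2 + (A2 - X_true) ** 2   # A pays (A1-X)^2 + (A2-X)^2
loss_B = (B1 - X_true) ** 2 + (B2 - A2) ** 2       # B pays (B1-X)^2 + (B2-A2)^2
print(loss_A, loss_B)                 # 850 225
```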
Sample Percent Questions
• What percent of people in the U.S. agree with this
opinion? “God created humans in basically their present
form in the last 10,000 years.” (Gallup, 1999)
• What percent of people in the U.S. agree with this
opinion? “The U.S. government is hiding that it knows of
the existence of aliens.” (CNN 1994)
• By weight, what percent of cheddar cheese is protein?
(U.S. Department of Agriculture)
• What percent of the population of India is literate?
(Nation of India)
Experiment Features
• All answers are integers in [0,100]: either a real %, or
X = XA + XB with each term from a six-sided die with faces [0,10,20,30,40,50] (see the sketch after this list)
• All by hand, subjects roll dice first, for credibility
• Subjects told all after each round, to help
learning
• Zipper design, to minimize strategic interactions
• Lottery payoff, to reduce risk aversion
• Double dice, for easy squared-error penalty
• Only tell B-sign, to reduce signaling ability
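
A minimal Python sketch of the dice mechanism above (treating each subject's own die as their clue is an assumption of this sketch):

```python
import random

# Sketch of the target-generation mechanism described above (details assumed):
# each die face is one of [0, 10, 20, 30, 40, 50] and the target is X = XA + XB,
# so X is an integer in [0, 100].  Each subject's clue is their own die roll.
FACES = [0, 10, 20, 30, 40, 50]

def roll_target(rng):
    xa, xb = rng.choice(FACES), rng.choice(FACES)
    return xa, xb, xa + xb            # (A's clue, B's clue, true X)

rng = random.Random(0)
xa, xb, x = roll_target(rng)
print(xa, xb, x)
```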
Complexity of Agreement

Can exchange 100 bits and agree to within 10% (fails with probability 10%).
Can exchange 10^6 bits and agree to within 1% (fails with probability 1%).

“We first show that, for two agents with a common
prior to agree within ε about the expectation of a
[0,1] variable with high probability over their prior, it
suffices for them to exchange order 1/ε² bits. This
bound is completely independent of the number of
bits n of relevant knowledge that the agents have.
… we give a protocol ... that can be simulated by
agents with limited computational resources.”
Aaronson (2005) Proc. ACM STOC, 634-643.
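
Aaronson's bounded-communication protocol is more involved; the sketch below (a toy exchange constructed for this writeup, not the paper's protocol) only illustrates the underlying dynamic: two agents with a common prior alternately announce conditional expectations, refine on each other's announcements, and quickly reach agreement.

```python
from fractions import Fraction

# Toy illustration (assumed example, NOT Aaronson's protocol): agents with a
# common prior alternately announce E[X | their info]; each announcement lets
# the other rule out states inconsistent with it, and announcements converge.
states = list(range(6))
mu = {w: Fraction(1, 6) for w in states}
X = {0: 0, 1: 10, 2: 4, 3: 6, 4: 10, 5: 2}
P1 = [{0, 1}, {2, 3}, {4, 5}]         # agent 1's initial partition
P2 = [{0, 2, 4}, {1, 3, 5}]           # agent 2's initial partition

def exp_on(event):
    tot = sum(mu[w] for w in event)
    return sum(mu[w] * X[w] for w in event) / tot

def refine(partition, announce):
    """Split each cell by the value the speaker would announce at each state."""
    out = []
    for c in partition:
        groups = {}
        for w in c:
            groups.setdefault(announce[w], set()).add(w)
        out.extend(groups.values())
    return out

true_state = 3
for rnd in range(10):
    a1 = {w: exp_on(next(c for c in P1 if w in c)) for w in states}
    P2 = refine(P2, a1)               # agent 2 hears agent 1's announcement
    a2 = {w: exp_on(next(c for c in P2 if w in c)) for w in states}
    P1 = refine(P1, a2)               # agent 1 hears agent 2's announcement
    print(rnd, a1[true_state], a2[true_state])
    if a1[true_state] == a2[true_state]:
        break                         # announcements now agree
```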
We Can’t Agree to Disagree
Aumann in 1976:                    Since generalized to:
• Any information
• Of possible worlds            →  Impossible worlds
• Common knowledge              →  Common Belief
• Of exact E1[x], E2[x]         →  A f(•, •), or who max
• Would say next                →  Last ±(E1[x] - E1[E2[x]])
• For Bayesians                 →  At core, or Wannabe
• With common priors            →  Symmetric prior origins
• If seek truth, not lie or misunderstand
Generalized Beyond Bayesians
• Possibility-set agents: if balanced
(Geanakoplos ‘89), or “Know that they
know” (Samet ‘90), …
• Turing machines: if can prove all
computable in finite time (Megiddo ‘89,
Shin & Williamson ‘95)
• Ambiguity Averse ($\max_{act} \min_{p \in S} E_p[U_{act}]$)
• Many specific heuristics …
• Bayesian Wannabes
Consider Bayesian Wannabes
$\tilde X_i(\omega) = E_{\mu_i}[X(\omega) \mid I_i(\omega)] + e_{i\omega}[X]$

Disagreement source                              Pure source of Agreeing to Disagree (A.D.)?
• Prior:  $\mu_1(\omega) \ne \mu_2(\omega)$      Yes
• Info:   $I_1(\omega) \ne I_2(\omega)$          No
• Errors: $e_{1\omega} \ne e_{2\omega}$          Yes  (Ex: $E_1[\pi] \approx 3.14$, $E_2[\pi] \approx 22/7$)

A.D. about $X(\omega)$ ⇒ A.D. about $E[Y(\omega) \mid \Omega]$
A.D. about $X(\omega)$ ⇒ A.D. about $Y(\omega) = Y$ (state-independent)
Either combination of sources implies the pure (error-only) version!
Notation

State $\omega \in \Omega$ (finite)
Random variable $X(\omega) \in [\underline{X},\ \underline{X} + \Delta X]$ (bounded, with range $\Delta X$)
Information $I_i(\omega) \in \mathcal{I}_i$ (a partition)
Bayesian estimate $X_i(\omega) \equiv E_{\mu_i}[X(\omega) \mid I_i(\omega)]$
Wannabe estimate $\tilde X_i(\omega) = \tilde E_{i\omega}[X] = X_i(\omega) + e_{i\omega}[X]$

Assume: $e_{i\omega} = e_{i\omega'} \ \ \forall \omega' \in I_i(\omega)$
More Notation

Bias: $e_i[X \mid S] \equiv E_\mu[e_{i\omega}[X] \mid S]$
Expect unbiased: $\tilde E_{i\omega}[e_i[X \mid S]] = 0$
Calibrated error: $e_{i\omega}[X] = m_{i\omega}[X] + c_i[X]$
Choose $c_i$ at $D_i(\omega) \in \mathcal{D}_i$, which coarsens $\mathcal{I}_i$

Lemma 1: The $c_i$ which minimizes $E[(\tilde X_i - X)^2 \mid D_i(\omega)]$ sets $e_i[X \mid D_i(\omega)] = 0$.
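
The first-order condition behind Lemma 1, written out (a routine least-squares step; it assumes the minimizing expectation uses the same prior $\mu$ that defines the bias, so the Bayesian part of the error drops out by iterated expectations):

```latex
% Assumed spelling-out of Lemma 1's first-order condition in c_i,
% with \tilde X_i = X_i + m_{i\omega}[X] + c_i[X]:
0 = \tfrac{1}{2}\,\partial_{c_i} E\big[(\tilde X_i - X)^2 \mid D_i(\omega)\big]
  = E\big[\tilde X_i - X \mid D_i(\omega)\big]
  = \underbrace{E\big[X_i - X \mid D_i(\omega)\big]}_{=\,0
      \text{ by iterated expectations, since } D_i \text{ coarsens } \mathcal{I}_i}
    \;+\; E\big[e_{i\omega}[X] \mid D_i(\omega)\big]
  = e_i[X \mid D_i(\omega)] .
```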
Still More Notation
Estimation set: $B_i^q(E) \equiv \{\omega \mid \tilde E_{i\omega}[\mu_i(E \mid I_i(\omega))] \ge q\}$
$N$ $q$-agree that $E$ in $C$: $E \cap C \subset \bigcap_{i \in N} B_i^q(C \cap E)$
$i,j$ $\varepsilon$-disagree about $X$: $\{\omega \mid \tilde X_i(\omega) \ge \tilde X_j(\omega) + \varepsilon\} \equiv F_\varepsilon[X]$
$i,j$ $\alpha,\beta$-disagree about $X$: $\{\omega \mid \tilde X_i(\omega) \ge \alpha > \beta \ge \tilde X_j(\omega)\}$
$i,j$ $q$-agree to $\varepsilon$-disagree: $\{i,j\}$ $q$-agree that $i,j$ $\varepsilon$-disagree
Let 1,2 Agree to Disagree Re X
$A \equiv C \cap F_\varepsilon[X]$,  $B_i \equiv B_i^q(A)$ (coarsens $D_i$)
$e_i \equiv e_i[X \mid B_i]$,  $p_i \equiv \mu(A \mid B_i)$,  $\tilde p_2 \equiv \tilde E_{2\omega}[p_0]$
$p_0 \equiv \min(p_1, p_2)$,  $\hat\varepsilon(p) \equiv p\varepsilon - 2(1-p)\Delta X$

Lemma 4: $\varepsilon \ge 0$ and $e_2 = 0$ imply $e_1 \ge \hat\varepsilon(p_0)$

$\tilde E_{2\omega}[e_1] \ge \hat\varepsilon(\tilde p_2)$   (3)
$\tilde E_{1\omega}[e_1] = 0$   (4)
Theorems
1. Re agents 1,2 $q$-agreeing to $\varepsilon$-disagree about $X$:
IF at some $\omega$ equations (3) and (4) are satisfied,
THEN at $\omega$ agents 2,1 $q$-agree to $\hat\varepsilon(\tilde p_2),0$-disagree about $e_1$.

2. IF agents 1,2 $q$-agree (within $C$) that they $\varepsilon$-disagree about $X$, and satisfy eqns (3) and (4),
THEN (within $C$) agents 2,1 $q$-agree to $\hat\varepsilon(\tilde p_2),0$-disagree about $e_1$.
Theorem in English

• If two Bayesian wannabes
– nearly agree to disagree about any X,
– nearly agree each thinks himself nearly unbiased,
– nearly agree that one agent’s estimate of other’s bias
is consistent with a certain simple algebraic relation
• Then they nearly agree to disagree about Y,
one agent’s average error regarding X.
(Y is state-independent, so info is irrelevant).
Hanson (2003) Theory & Decision 54(2):105-123.
Wannabe Summary
• Bayesian wannabes are a general model of
computationally-constrained agents.
• Add minimal assumptions that maintain some
easy-to-compute belief relations.
• For such Bayesian wannabes, A.D. (agreeing to
disagree) regarding X(ω) implies A.D. re Y(ω) = Y.
• Since info is irrelevant to estimating Y, any A.D.
implies a pure error-based A.D.
• So if pure error A.D. irrational, all are.
We Can’t Agree to Disagree
Aumann in 1976:                    Since generalized to:
• Any information
• Of possible worlds            →  Impossible worlds
• Common knowledge              →  Common Belief
• Of exact E1[x], E2[x]         →  A f(•, •), or who max
• Would say next                →  Last ±(E1[x] - E1[E2[x]])
• For Bayesians                 →  At core, or Wannabe
• With common priors            →  Symmetric prior origins
• If seek truth, not lie or misunderstand
Which Priors Are Rational?
Prior = counterfactual belief if same min info
• Extremes: all priors rational, vs. only one is
– Can claim rational unique even if can’t construct (yet)
• Common to say these should have same prior:
– Different mental modules in your mind now
– You today and you yesterday (update via Bayes’ rule)
• Common to criticize self-favoring priors in others
– E.g., coach favors his kid, manager favors himself
– “I (Joe) beat Meg, but if I were Meg, Meg beats Joe”
• Prior origins not special => priors same
Origins of Priors
• Seems irrational to accept some prior origins
– Imagine random brain changes for weird priors
• In standard science, your prior origin not special
– Species-common DNA
• Selected to predict ancestral environment
– Individual DNA variations (e.g. personality)
• Random by Mendel’s rules of inheritance
• Sibling differences independent of everything else!
– Culture: random + adapted to local society
• Turns out you must think differing prior is special!
• Can’t express these ideas in standard models
Standard Bayesian Model

[Figure: a prior over the state space, Agent 1’s and Agent 2’s information sets, and their common knowledge set.]
An Extended Model

[Figure: multiple standard models, each with different priors.]
Standard Bayesian Model

State $\omega \in \Omega$ (finite);  Agent $i \in \{1, 2, \dots, N\}$
Prior $p_i(\omega)$,  $p \equiv (p_1, p_2, \dots, p_N)$
Info $\Pi_i^t(\omega)$,  $\Pi^t \equiv (\Pi_1^t, \Pi_2^t, \dots, \Pi_N^t)$,  $\Pi \equiv (\Pi^t)_{t \in T}$
Belief $p_i^{t\omega}(E) = p_i(E \mid \Pi_i^t(\omega))$,  $E \subset \Omega$

In model $(\Omega, p, \Pi)$, $p$ is common knowledge.
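
A toy Python instance of the objects on this slide (all numbers are assumed): a finite state space, one prior per agent, one partition per agent and period, and the induced conditional beliefs.

```python
from fractions import Fraction

# Toy instance of the model (Omega, p, Pi) above; all numbers are assumed.
Omega = ["w1", "w2", "w3", "w4"]
p = {   # one prior per agent, p = (p_1, p_2)
    1: {"w1": Fraction(1, 4), "w2": Fraction(1, 4),
        "w3": Fraction(1, 4), "w4": Fraction(1, 4)},
    2: {"w1": Fraction(1, 2), "w2": Fraction(1, 6),
        "w3": Fraction(1, 6), "w4": Fraction(1, 6)},
}
Pi = {  # information partition Pi_i^t of agent i at period t
    (1, 0): [{"w1", "w2"}, {"w3", "w4"}],
    (2, 0): [{"w1", "w3"}, {"w2", "w4"}],
}

def belief(i, t, w, E):
    """p_i^{t,w}(E) = p_i(E | Pi_i^t(w)), as on the slide."""
    c = next(cell for cell in Pi[(i, t)] if w in cell)
    return sum(p[i][x] for x in E & c) / sum(p[i][x] for x in c)

E = {"w1", "w4"}
print(belief(1, 0, "w1", E))   # 1/2
print(belief(2, 0, "w1", E))   # 3/4
```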
Extending the State Space

Possible priors $p_i \in P$,  $p \in P^N$
Pre-state $\tilde\omega \equiv (\omega, p) \in \tilde\Omega \equiv \Omega \times P^N$
As events: $\tilde E \equiv \{(\omega, p) : \omega \in E\}$,  $\tilde p \equiv \{(\omega, p') : p' = p\}$

$\tilde p_i(\tilde E \mid \tilde p) \equiv p_i(E)$   (1)

$\tilde\Pi_i^t((\omega, p)) \equiv \{(\omega', p') : p' = p,\ \omega' \in \Pi_i^t(\omega)\}$
An Extended Model
Pre-info $\Gamma_i^t(\tilde\omega)$,  $\Gamma^t = (\Gamma_1^t, \Gamma_2^t, \dots, \Gamma_N^t)$,  $\Gamma = (\Gamma^t)_{t \in S}$
$\Gamma_i^t(\tilde\omega) = \tilde\Pi_i^t(\tilde\omega)$,  $\forall t \in T \cap S$

Pre-prior $q_i(\tilde\omega)$,  $q = (q_1, q_2, \dots, q_N)$,  allow $q_i \ne q_j$

$q_i(\tilde E \mid \tilde p) = \tilde p_i(\tilde E \mid \tilde p)$   (2)

In model $(\tilde\Omega, P, \tilde\Pi, q, \Gamma)$, $p$ is common knowledge relative to $\tilde\Pi^t$, $t \in T$, but not necessarily relative to $\Gamma^t$, $t \in S$.
My Differing Prior Was Made Special
My prior and any ordinary event E are informative about
each other. Given my prior, no other prior is informative
about any E, nor is E informative about any other prior.

(1) & (2) $\Rightarrow$ $q_i(\tilde E \mid \tilde p_1, \tilde p_2, \dots, \tilde p_i, \dots, \tilde p_N, B) = p_i(E \mid B)$

In $P$, $A$ is independent of $B$ given $C$ if $P(A \mid BC) = P(A \mid C)$.

Theorem 1: In $q_i$, any $\tilde E$ is independent of any $\tilde p_{j \ne i}$, given $\tilde p_i$.
Theorem 2: In $q_i$, any $\tilde E$ depends on $\tilde p_i$ via $q_i(\tilde E \mid \tilde p_i) = p_i(E)$.
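
Written as one chain, a compressed (informal) restatement of why the theorems follow from conditions (1) and (2):

```latex
% Condition (2), then condition (1), after conditioning the pre-prior q_i on
% the full prior vector (p_1, ..., p_N):
q_i(\tilde E \mid \tilde p_1, \dots, \tilde p_N)
  \;=\; \tilde p_i(\tilde E \mid \tilde p_1, \dots, \tilde p_N)
  \;=\; p_i(E).
% The result depends only on agent i's own prior p_i, so given \tilde p_i the
% other priors \tilde p_{j \ne i} carry no further information about \tilde E
% (Theorem 1), and the dependence on \tilde p_i is exactly
% q_i(\tilde E | \tilde p_i) = p_i(E) (Theorem 2).
```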
Corollaries
Corollary 1: If $q_i(\tilde E \mid \tilde p_i = P) = q_i(\tilde E \mid \tilde p_i = P')$, then $P(E) = P'(E)$.
My prior only changes if events become more or less likely.

Corollary 2: If $q_i(\tilde E \mid \tilde p_i = P, \tilde p_j = P') = q_i(\tilde E \mid \tilde p_i = P', \tilde p_j = P)$, then $P(E) = P'(E)$.
If an event is just as likely in situations where my prior is switched with someone else’s, then those two priors assign the same chance to that event.

Only common priors satisfy these and symmetric prior origins.
A Tale of Two Astronomers
• Disagree if universe open/closed
• To justify via priors, must believe:
“Nature could not have been just as
likely to have switched priors, both if
open and if closed”
“If I had different prior, would be in
situation of different chances”
“Given my prior, fact that he has a
particular prior says nothing useful”
All false for brothers’ genetic priors!
We Can’t Agree to Disagree
Aumann in 1976:                    Since generalized to:
• Any information
• Of possible worlds            →  Impossible worlds
• Common knowledge              →  Common Belief
• Of exact E1[x], E2[x]         →  A f(•, •), or who max
• Would say next                →  Last ±(E1[x] - E1[E2[x]])
• For Bayesians                 →  At core, or Wannabe
• With common priors            →  Symmetric prior origins
• If seek truth, not lie or misunderstand
Why Do We Disagree?
• Theory or data wrong?      → They seem robust
• Few know theory?           → Big change coming?
• Infeasible to apply?       → Need just a few adds
• We lie? Exploring issues? Misunderstandings? We not seek truth?
                             → We usually think not, and effect is linear
• Each has prior: “I reason better”?
                             → But we complain of this in others
Our Answer: We Self-Deceive
• We are biased to think we’re the better driver, lover, …
“I less biased, better data & analysis”
• Evolutionary origin: helps us to deceive
– Mind “leaks” beliefs via face, tone of voice, …
– Leak less if conscious mind really believes
• Beliefs like clothes
– Function in harsh weather, fashion in mild
• When made to see self-deception, still disagree
– So at some level we accept that we do not seek truth
How Few Meta-Rationals (MR)?
Meta-Rational = Seek truth, not lie, not self-
favoring-prior, know disagree theory basics
• Rational beliefs linear in chance other is MR
• MRs who meet and talk long should see that they are MR?
– Joint opinion path becomes random walk
• We see virtually no such pairs, so there are few MR!
– N people each talk with 2T others, making ~N·T·(%MR)² pairs (see the check below)
– 2 billion each talk to 100: if 1 in 10,000 is MR, get ~1000 pairs
• None even among those who accept that disagreement is irrational
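
The pair count works out as follows (a direct restatement of the slide’s arithmetic in Python):

```python
# Rough count of expected meta-rational (MR) conversation pairs, restating the
# slide's arithmetic: N people each talk with 2*T others, giving ~N*T pairs of
# talkers, and a pair is MR-MR with probability (fraction_MR)**2.
N = 2_000_000_000        # people
partners = 100           # each talks with 2*T = 100 others, so T = 50
fraction_MR = 1 / 10_000

pairs = N * partners / 2             # ~N*T = 1e11 conversation pairs
mr_pairs = pairs * fraction_MR ** 2
print(mr_pairs)                      # ~1000 expected MR-MR pairs
```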
When Justified In Disagree?
When others disagree, so must you
• Key: relative MR/self-deception before IQ/info
• Psychology literature self-deception clues:
– Less in skin response, harder re own overt behaviors, older kids
hide better, self-deceivers have more self-esteem, less
psychopathology/depression
• Clues?: IQ/idiocy, self-interest, emotional arousal,
formality, unwilling to analyze/consider
– Self-deceptive selection of which clues to use
• Need: data on who tends to be right if disagree!
– Tetlock shows “hedgehogs” wrong on foreign events
– One media analysis favors: longer articles, news rather than editorial
style, by men, non-book sources on the web or on air, in topical publications with
more readers and awards
We Can’t Agree to Disagree
Aumann in 1976:                    Since generalized to:
• Any information
• Of possible worlds            →  Impossible worlds
• Common knowledge              →  Common Belief
• Of exact E1[x], E2[x]         →  A f(•, •), or who max
• Would say next                →  Last ±(E1[x] - E1[E2[x]])
• For Bayesians                 →  At core, or Wannabe
• With common priors            →  Symmetric prior origins
• If seek truth, not lie or misunderstand
Implications
• Self-Deception is Ubiquitous!
• Facts may not resolve political/social disputes
– Even if we share basic values
• Let models of academia include non-truth-seekers
• New info institution goal: reduce self-deception
– Speculative markets do well; use more?
• Self-doubt for supposed truth-seekers
– “First cast out the beam out of thine own eye; and then
shalt thou see clearly to cast out the mote out of thy
brother's eye.” Matthew 7:5
Common Concerns
• I’m smarter, understand my reasons better
• My prior is more informed
• Different models/assumptions/styles
• Lies, ambiguities, misunderstandings
• Logical omniscience, act non-linearities
• Disagree explores issue, motivates effort
• We disagree on disagreement
• Bayesian “reductio ad absurdum”
Counter Example
• $P(y) = \exp(-by)$,  $\pi(y) \propto \exp(-\beta y)$
• $s = y + \varepsilon$,  $E[\varepsilon^2] = \sigma^2$
• $\varepsilon \sim N(0, \sigma^2)$,  $\eta \sim N(0, \theta^2)$
• $U(s', b) = b s' + \ln(P(y^* \mid s'))$
• $P(s' \mid s, b) \propto \exp(E[U(s', b) \mid s, b]\,/\,r)$
• $P(y \mid s) = N(s - \beta\sigma^2,\ \sigma^2)$