Business Decisions
SOC 138/ECO 148
Class Overview
■ Three Components:
Part 1: Models of Rational and
Irrational Decision Making Behavior
Part 2: Pathologies in Individual
Decision Making
Part 3: Decision Making in Social
Contexts
Life as a Card Force
■ The card force as a metaphor for decision
making in the real world
It often appears as though we choose our actions
from a vast set of possibilities
In practice, however, our options are often highly
constrained by social structure (e.g., market forces,
laws, organizational norms, etc.)
As with the card force, we may not be aware of the
factors which determine our destiny
■ In order to make optimal decisions, we must
have a realistic idea of our own efficacy – this
requires thinking carefully about the “big
picture”
The Illusion of Control
■ In the card force, the “trick” lies in making the subject
think that they are in control (while the magician runs
the show)
■ Predisposing elements (Langer, Gold, etc.)
(Apparent) choice or other direct involvement (especially
physical contact)
Familiarity with options or “decision” process
Differential outcomes (esp if unpredictable)
Early “success” (i.e., positive outcomes)
Social comparison/skill frame
■ Compare to illusory expertise – here we are fooled
into exaggerating our own agency
Examples
■ “Skill” in the casino
Regular roulette players believe the game to be skill-based;
believe croupiers to be able to pick which numbers will come
up
Dice players throw harder when trying to roll high numbers
than low numbers
■ Lottery “choice”
Langer found that lottery tickets chosen by subjects were
valued at over four times those given randomly
“Dream books” and other lottery paraphernalia
■ Predicting coin tosses (Langer and Roth)
Asked to predict coin tosses, given data rigged to show
accuracy of 50%; those with early success more likely to think
themselves “better than average”
25% of subjects believed that distractions would inhibit
performance; 40% thought that practice would help
Structure and Opportunity
■ While good decision making lets you make the most of
what you’re dealt, it is social structure which often
determines whether you get a winning or losing hand
Effects are not always obvious to individuals, since they occur
at the macro level
■ Many kinds of structures matter:
Spatial (school quality, access to venture capital)
Relational (access to information, financial aid, assistance)
Market (competition, stability of supply/demand, barriers to
entry)
Demographic (age, discrimination, biased legal practices
(marriage bans, etc.))
Example: Getting a Job
■ Much job search activity occurs through informal
networks (i.e., personal ties)
■ Granovetter’s argument and differential opportunity
Personal ties are homophilous
Jobs are sought through personal contacts
Those in higher SES groups have a relational edge
Importance of diversity in networks, maintenance of
“weak ties”
(Figure: network of personal ties among Joe, Julie, Josh, Jim, Joan, Jeff, Jane, and John)
Example: Managerial Interventions
The Company Behind the Chart
Selection Effects
■ In many cases, what we observe regarding
outcomes is due not to choice, but to selection
effects
Only certain persons/organizations get the chance to engage
in certain behaviors
We may only observe persons/organizations for whom certain
outcomes obtain
Compare with confirmation bias – similar effects, but based on
structure rather than cognition
■ Common variants:
“Winners” are more visible than “losers” (e.g., rags to riches)
Some opportunities only arise for “winners” (e.g.,
philanthropy)
Certain opportunities attract “losers” (e.g., self-selected
surveys, customer feedback lines)
Example: The Selection Scam
■ The procedure (described by DeGroot (1989),
among others):
Week 1: Send advertisement to k·2^n individuals, with (binary)
market prediction (half “up,” half “down”)
Week 2: For the half receiving the correct prediction,
send new advertisement, with new prediction
Weeks 3 through n−1: Repeat
Week n: Offer to sell your prognosticative services to the
k remaining individuals for a large fee
■ Why does it work?
Hidden selection mechanism – new predictions not
independent of past sequence
We fail to consider whether the structure of the
interaction might provide us with biased information
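The arithmetic of the scam can be sketched in a few lines of Python. The slide's flattened notation is read here as k·2^(n−1) for the initial mailing, so that exactly k recipients survive the n−1 halvings; that starting size is an illustrative assumption.

```python
import random

def selection_scam(k, n):
    """Simulate the scam: mail k * 2**(n-1) people a random binary
    market prediction each week, keeping only those whose prediction
    came true.  (Starting size is an illustrative assumption.)"""
    recipients = k * 2 ** (n - 1)
    for week in range(n - 1):
        market_up = random.random() < 0.5  # the actual market outcome
        # doesn't matter: half the remaining recipients were told "up"
        # and half "down", so exactly half always survive
        recipients //= 2
    return recipients  # these k people saw n-1 perfect predictions

print(selection_scam(k=100, n=6))  # -> 100
```

Whatever the market does, some group always receives an unbroken run of correct predictions, which is exactly the hidden selection mechanism the slide describes.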
The Illusion of Powerlessness
■ In some cases, we make the reverse error: we
assume that the world is less contingent on
our action than it really is
Results in failure to recognize and exploit opportunities
(learned helplessness)
Can foster improper attribution of outcomes to external
factors
■ Generally occurs where causal chain between
action and outcome is long, complex, stochastic,
and/or obscure
Self-fulfilling prophecies
Risk behavior (smoking, diet)
Ideomotor effect
Example: The Ideomotor Effect
■ Little-known fact: unconscious muscle
movements can be amplified by certain
mechanisms to produce noticeable effects
Mechanism behind dowsing rods, automatic writing,
ouija boards
Subjects do not perceive themselves as being the
source of the motion (even though this can be
shown)
■ Ideomotor effect results in spurious causal
beliefs
Dowsers become convinced that an external force
moves the rod, even when the reverse is
demonstrable
Used to sell “quack” technologies (see readings)
Summary
■ Successful decision making depends upon
understanding the link between our actions and
eventual outcomes
This link can be hard to discern
■ In many cases, our choices and information are heavily
constrained by social structure
Where these constraints are difficult to perceive, we may
experience an illusion of control
When our information is biased, we may fall prey to selection
effects
■ Occasionally, however, the constraints we face are
less severe than they appear
Here, we may be subject to the illusion of powerlessness
■ Next time: context, roles, and attribution
Context, Roles, and Attribution
Business Decisions
SOC 138/ECO 148
I
LOVE
PARIS IN THE
THE SPRINGTIME
Expectancy and Selective Perception
■ Like memory, perception is a constructive process
Sensory inputs are heavily modified to clean up noise,
“patch” inconsistencies, etc.
Modifications are in the direction of familiar and
expected patterns – to an extent, we really do see what
we expect to see
■ Expectations often affected by context
We expect consonance between elements of our
environment
We automatically use some features of our environment
to infer others
In some cases, of course, these contextual cues may be
very misleading….
Examples: Complaints and Rivalries
■ Sports rivalries
Hastorf and Cantril: perceptions of foul play in
Dartmouth/Princeton game correlated with own affiliation
■ Media bias
Vallone et al.: of those perceiving bias in 1980 presidential
election coverage, approximately 90% saw bias as being against
their favored candidate
■ International disputes
Vallone et al.: perceptions of bias in coverage of Beirut
massacre correlated with own position; references to Israel
seen as generally unfavorable by pro-Israeli students,
favorable by pro-Arab students
■ Important implication: if you want good product
reviews, make sure customers are with you, rather
than against you
Demand Effects
■ When given a specific task to perform, our
inferences about the purpose of the task can
act as expectations – this generates a demand
effect
Not always unconscious; subjects sometimes
actively assess demands
■ Exacerbating factors
Ambiguous stimuli
Tasks which cannot be completed as requested
Strong “hints” as to the purpose of the task or
intent of the task designer
Example: Plasticity
■ Order effects
Primacy
➔ The first of a series of related questions can serve as an
anchor for subsequent responses (e.g., American reporters
in Russia vs. Russian reporters in America)
Recency
➔ When given many response options, subjects become
increasingly likely to choose the last one (e.g., abortion
easier/difficult/status quo vs. easier/status quo/difficult)
■ Pseudo-opinions
When asked about things we don’t understand, we
tend to make things up
➔ Metallic Metals Act: good for America, or national menace?
➔ Hartley’s Danireans, Pireneans, and Wallonians
Salience and Causal Attribution
■ Salience: the tendency for a particular attribute
to command attention
Often a result of structural (even spatial) factors
■ Salience can lead to attributions of causation
and significance (even when nonsensical)
Taylor and Fiske: subjects rate discussant seated in
front of them as having controlled discussion
Taylor et al.: subjects rate minority discussant as
being more influential where fewer other minorities
present
McArthur and Post: similar results from brightly
colored shirts, rocking chairs, bright lights, etc.
The Fundamental Attribution Error
■ In interpersonal contexts, it is the actors at
hand (and their behaviors) which generally
appear most salient
“Background” factors like the context of the
interaction itself command less attention
■ This saliency leaves the actors highly available
as explanatory devices
Fundamental attribution error: the assumption that
alters’ behaviors result from stable individual traits
■ Additional contributing factor: limited
information
Many situational factors affecting alters are not
available to ego
Example: Bad Samaritans
■ Darley and Batson (1973): giving “good Samaritan”
speech failed to impact willingness to help man in alley
■ Pietromonaco and Nisbett (1982) studied subjects’
predictions regarding behavior of others in Darley and
Batson situations
Those unaware of results predicted based on religiosity
Those aware of results still predicted based on religiosity (no
significant difference)
■ Fundamental attribution error: although the real
predictor was time pressure, subjects continued to
assume that religiosity was the determinant of helping
Compare with Miller’s (1973) finding that obedient Milgram
subjects were perceived as cold, maladjusted, and
aggressive
The Flip Side of Fundamental Attribution – Self-Perception
Positivity/Negativity in Attributions
■ An important caveat to the above: attributions are
secondarily biased by positive/negative personal
evaluations
For those we like, we are somewhat more likely to see negative
acts as situational and positive acts as dispositional
For those we dislike, the trend is reversed
■ An implication for marketing: the evaluative feedback
loop
If customers like you, they’re more likely to see good features
of your product as typical of your company, and bad features
as flukes caused by situational constraints
If customers dislike you, they’ll be more likely to explain away
good experiences and view the bad ones as being typical of
the firm
Once you get on the treadmill, reversing the trend may be hard
Roles and the Availability of Information
■ Informational asymmetry occurs beyond the
ego/alter context
Different social roles provide us with different
opportunities to act, and to be seen acting
➔ Remember the selection effect?
Our ability to correct for this bias is limited
➔ Intrinsically impossible to debias ourselves without
throwing information away
➔ Even this is difficult or impossible, as jury studies suggest
■ Examples:
The janitor versus the CEO
The curse of the vice presidency
The “Game Show Host” Effect
■ A striking role effect from Ross et al.:
In a “quiz-show” format, one subject assigned to ask
questions of another subject while third parties watch
➔ “Questioner” made up questions, designed to be difficult
➔ All were aware of the situation
Observers systematically rated questioner as more
knowledgeable overall than respondent
➔ Questioner, respondent did not exhibit such a strong bias
■ This “Game Show Host” effect serves to amplify power
differences
Powerful roles allow for more control over self-presentation
Favorable presentations, in turn, are attributed to individual
superiority by observers
In other words, make sure you’re the one asking the
questions!
Attributions of Leadership
■ Role constraints can also give leaders excessive
credit – or blame
■ The “Illusion of Leadership” (Weber et al., 2001)
Subjects play organizational coordination game
Leader chosen at random, asked to make speech
Large groups usually fail, small groups usually succeed (with
or without a leader)
Subjects tend to credit (or blame) the leader for success
or failure, ignoring group size issues
Groups willing to pay to oust leaders more often in large
groups than small ones
■ The moral: when you are highly visible, expect to serve
as a scapegoat
Summary
■ What we perceive is heavily influenced by contextual
factors
Perception “smoothed” towards expectations
Salient stimuli given more weight, given causal attribution
■ Attributions regarding ourselves and others are
similarly affected
We tend to see others’ behaviors as innate, ours as contingent
This is modified by positivity/negativity bias
Where roles expose us to biased information, this too affects
our attributions of those occupying those roles
■ Next time: I feel we’ll cover values and affective
reasoning
Values and Affective
Reasoning
Business Decisions
SOC 138/ECO 148
The “Squishy” Side of Decision Making
■ We have tended to focus on monetary
questions, easily calculable risks, etc.
■ Business decisions in the real world have
messier elements
E.g., do I accept that promotion and risk losing my
marriage, or do I stay put and risk looking like an
underachiever?
■ To understand these decisions, we need a
better sense of how to factor in complex values
■ Where emotional factors are strong, models
based on affective reasoning may be of use
Value Elicitation
■ A basic assumption with which we have
worked: people have well-formed preferences,
and we can (in principle) elicit them
■ Non-trivial problem in reality
As we saw with plasticity, how we respond can be
very context dependent
Especially hard where visceral factors are involved
(Loewenstein)
■ No general solution, although this is a topic of
intense ongoing research
Example: Willingness to Pay and Embedding Effects
■ One approach to valuing a public good: ask respondents
their willingness to pay (WTP) to preserve the good
Sounds good, but problems have arisen
➔ Embedding effects: almost no change in
WTP as problem size increases for (e.g.)
birds in oil ponds, wetlands in NJ,
wildlife refuges, etc.
Stated WTP predicted by attitude
towards problem, but not sensitive to
cost of the solution – responses are not
economic in nature
Affective Reasoning
■ Much of our focus has been on choice models
(and their failings)
■ Another perspective: affective reasoning
We form impressions of people, situations, behaviors,
etc. based on enculturation and experience
We produce behaviors in accordance with our
impressions, as determined by a fixed set of rules
Emphasis on emotional/visceral processes
■ Features
Descriptive, rather than normative
Less focused on uncertainty, payoffs
Particularly good for modeling “snap” judgments,
judgments in emotionally charged environments
The EPA Dimensions
■ Osgood and colleagues, in cross cultural
studies, found that humans’ impressions of
actors, behaviors, objects, etc. could be
described by three factors:
“Evaluation”: Good/Desirable vs. Bad/Undesirable
“Potency”: Strong/Powerful vs. Weak/Powerless
“Activity”: Active/Young/Fast vs. Inactive/Old/Slow
■ Impressions vary by culture and context, but
the dimensions themselves remain
■ Using semantic differential scales, we can
measure impressions, e.g., for business
concepts
Increasingly used e.g. in "sentiment analysis"
Sample Baseline Impressions
(Chart: baseline EPA impressions for Executive, Firm, Profit, Manager, Fire, Entrepreneur, Cost Reduction, Layoff, and Worker)
Affect Control Theory
■ Affect Control Theory (ACT): Modern, formal (i.e.,
mathematical) theory of affective reasoning and
behavior
Based on work of Heise, but builds on findings by
Osgood, Heider, and others within sociology and
psychology
■ Describes how actors make sense of their
surroundings, form affective judgments
“Do I like this person?”, “How powerful is she?”
■ Allows predictions of judgments and behaviors
Behavior chosen to maintain consistency among
perceptions
Model is not rational, although certain implications are
similar
➔ E.g., if I like myself, I am more likely to direct positive
actions my own way
Understanding Events
■ The fundamental unit of ACT is the event
■ Simple event structure
Actor: Behavior → Object
Actor may be individual, group, etc.
Object can be animate (e.g., another actor)
Example: “A manager yells at a worker.”
➔ Manager is actor, yell is behavior, worker is object
■ More complex event elements
Transient states (“An angry manager yells at…”)
Traits (“...a lazy worker…”)
Settings (“…in the boardroom.”)
Impression Formation
■ In the context of an event, base impressions are
modified via a process of impression formation
Impressions expressed as EPA ratings
New impressions are combinations of old ratings,
including interactions between elements
■ Numerous processes, including
Association: event elements tend to take on
characteristics of other event elements; e.g., receiving a
bad act makes you seem less good
Role characteristics: event elements take on role
characteristics; e.g., being an actor tends to make you
seem more powerful/active, while being an object makes
you seem weaker/more passive
Interactions: impressions of event elements can interact;
e.g., directing powerful acts to powerful targets makes
you seem more powerful
Sample Interaction: Evaluative Balance
■ Evaluations of actors, behaviors, and objects are
biased towards balance
Form of generalized transitivity
Direction of evaluation should be consonant – overall
evaluation moves towards product of individual
evaluations
“The enemy of my enemy is my friend,” “Good people
help each other,” “An eye for an eye”
■ Gollob’s equation:
A’e = -0.26 + 0.39Ae + 0.48Be + 0.25BeOe
Contamination effect: Actor impression moves towards
Behavior impression
Balanced interaction: doing good things to good Objects
or bad things to bad Objects makes a more positive
impression; doing bad things to good Objects or good
things to bad Objects has the reverse effect
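The equation above is easy to evaluate numerically. The EPA inputs below are made-up illustrative values, not published dictionary ratings.

```python
def actor_evaluation(Ae, Be, Oe):
    """In-context Actor evaluation from the slide's equation:
    A'e = -0.26 + 0.39*Ae + 0.48*Be + 0.25*Be*Oe"""
    return -0.26 + 0.39 * Ae + 0.48 * Be + 0.25 * Be * Oe

# A good actor (Ae=2) doing a good act (Be=2) to a good object (Oe=2):
balanced = actor_evaluation(2, 2, 2)
# The same good act directed at a bad object (Oe=-2):
imbalanced = actor_evaluation(2, 2, -2)
print(round(balanced, 2), round(imbalanced, 2))
```

The first case yields a higher in-context evaluation than the second, illustrating the balance effect: doing good things to good objects makes a more positive impression than doing good things to bad ones.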
Sample Impressions in Context
(Chart: in-context EPA impressions for Profit and Layoff)
Impressions in Context, cont.
(Chart: in-context EPA impressions for Worker)
Deflection and Perceived Likelihood
■ The sum of squared changes among event
elements in a given context is called
“deflection”
Represents extent of inconsistency between base
impressions and impressions in context
■ Events with high deflection seem “wrong” or
“unlikely”
Compare with representativeness heuristic –
deflection measures extent to which in-context
impressions seem representative of typical
impressions
Example: “The firm profited from the cost
reduction.” (D=2.26, lik=36%) vs. “The manager
profited from the layoff.” (D=6.73, lik=26%)
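As a sketch, deflection is just a sum of squared EPA changes over the event elements. The profiles below are invented for illustration, not taken from an ACT dictionary.

```python
def deflection(fundamentals, transients):
    """Deflection = sum of squared changes between base (fundamental)
    EPA profiles and in-context (transient) profiles, summed over the
    actor, behavior, and object of the event."""
    return sum((f - t) ** 2
               for fund, trans in zip(fundamentals, transients)
               for f, t in zip(fund, trans))

# (E, P, A) profiles for actor, behavior, object -- hypothetical numbers
base       = [(1.5, 1.0, 0.5), (0.8, 0.2, 0.1), (1.2, -0.5, 0.3)]
in_context = [(0.5, 1.2, 0.4), (0.9, 0.3, 0.2), (0.2, -0.4, 0.5)]
print(round(deflection(base, in_context), 2))
```

Larger shifts between base and in-context impressions produce larger deflection, and hence events that seem more "wrong" or "unlikely".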
Selecting Behaviors
■ Deflection also plays a role in behavioral
prediction
Given a cultural repertoire, choose the behavior and
object which minimizes the deflection of the
resulting event
A tiny bit like EU, except that we minimize deflection
rather than maximize utility
■ An implication: actors may sometimes do
things which make little sense in EU terms, but
which are affectively reasonable
Self-chastisement, for instance (Heise, 2011)
Example: Keeping up Appearances
■ “Imagine you are an entrepreneur...circle the one
term which best describes how you would prefer to be
perceived.”
General tendency to select impressions with lower
deflection
➔ Also, bias towards impressions with more positive
evaluations
p(A>B) ≈ 1/(1+exp(-0.04+0.32d))
■ Role framing can affect how we present ourselves to others
May try to be strategic, but also constrained by desire to
avoid generating deflection
Summary
■ Sometimes, conventional choice models are
difficult to apply
Our preferences may be ill-formed, or respondents may
not be following economic reasoning at all
■ An alternative: models of affective reasoning
Represent scenarios in terms of EPA ratings
Judgment/behavior based on integration of affective
impressions so as to minimize deflection
Can be used to make predictions, though not to make
normative recommendations
■ Next time: social influence
Social Influence and Network
Effects
Business Decisions
SOC 138/ECO 148
Social Influence
■ “No one is an island,” at least where decisions are
concerned
We receive information, take cues from others in our
environment
■ Asch conformity experiments
Subjects surrounded by alters who make clearly
inaccurate judgments
Many subjects followed alters, despite the obvious
inaccuracy
■ Milgram obedience studies
Subjects instructed to deliver increasingly painful (but
faked) shocks to another subject
So long as the orders came from an authority figure, the
majority of subjects obeyed
Being in the Right Place
■ Influence happens – but not everyone is equal
Some have more influence on others, some less
■ In the real world, much power and influence comes
from being in the right place
Economic and social activity is embedded within social
networks
Network position gives some actors tremendous
advantages, handicapping others
Knowing your place can give you an edge (Burt,
Krackhardt)
Understanding the network can tell you who is in a
position to influence others
Isolates, Cliques, Pendants, Stars, Bridges, and Hubs
(Figure: example networks illustrating these positions, including an undifferentiated node, an isolate, and a bridge)
Centrality
■ One dimension on which positions vary is the
extent to which they are “central”
■ Several distinct concepts, including:
Degree: number of direct ties
➔ High degree positions are influential, but also may be
subject to a great deal of influence from others
Betweenness: extent to which position serves as a bridge
➔ High betweenness positions are associated with “broker”
or “gatekeeper” roles; can often “firewall” information flow
Closeness: extent to which position has short paths
to other positions
➔ High closeness positions can quickly distribute
information, but may have limited direct influence
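Degree and closeness are easy to compute directly; a minimal pure-Python sketch follows (betweenness, which requires counting shortest paths through each node, is omitted for brevity). The toy network is invented for illustration.

```python
from collections import deque

def closeness(adj, v):
    """Closeness centrality of v: (n-1) divided by the sum of
    shortest-path distances from v, found by breadth-first search.
    Assumes a connected graph; adj maps node -> list of neighbors."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(adj) - 1) / sum(dist.values())

# A toy "star with a tail": hub 0 tied to 1-3, plus a chain 3-4-5
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3, 5], 5: [4]}
degree = {v: len(nbrs) for v, nbrs in adj.items()}
print(degree[0], round(closeness(adj, 0), 2))
```

Node 0 has the highest degree, but its closeness is pulled down by the tail, illustrating how the different centrality concepts can disagree.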
Centrality Example
(Figure: ten-node example network, with the max-degree, max-closeness, and max-betweenness positions labeled)
Top 3 by Degree: (1) Node 3; (2) Nodes 4 and 6; (3) Nodes 2 and 5
Top 3 by Closeness: (1) Nodes 4 and 6; (2) Nodes 3 and 8; (3) Nodes 2 and 5
Top 3 by Betweenness: (1) Node 8; (2) Nodes 4 and 6; (3) Node 9
The Promise and Pitfalls of High Betweenness
■ The Promise:
Minimal redundancy
“Gatekeeper” power
Burt (1992): better
chances of promotion,
higher income
■ The Pitfalls:
Chance of being caught
in group conflict
Distrust (“marginal”
status)
Krackhardt (1999):
increased strain,
likelihood of offending
alters
“Why Your Friends Have More Friends Than You Do”
Centralization and Team Performance
■ Centralization: the extent to which one actor is more
central than all others
Stars are (in general) most centralized structures; cliques are
minimally centralized
■ Centralization, satisfaction and performance
Bavelas, Leavitt and others studied work teams with four
structural forms:
Performance generally highest in centralized groups
➔ Star, “Y” took least time, made fewest errors, used fewest
messages
Satisfaction generally highest in decentralized groups
➔ Circle > Chain > ”Y” > Star (but central persons had fun!)
A lesson: optimal performance ≠ optimal satisfaction....
Exchange and Power
■ When transactions occur through networks, some
positions are much more powerful than others
More powerful positions wind up with more money, status, etc.
■ Positive exchange networks (non-exclusive)
Idea: you have more power when your trading partners have
more power
Examples: gossip, exchange of favors/prestige, coalitions,
peer to peer
Strong positions: clique members
Weak positions: pendants
■ Negative exchange networks (exclusive)
Idea: you have more power when your trading partners have
less power
Examples: rare works of art, labor, design contracts, warfare
Strong positions: hubs
Weak positions: pendants
Exchange Network Examples
(Figure: positive and negative exchange networks, with nodes shaded from less to more power)
Summary
■ We naturally depend upon others for
information and behavioral cues
■ The influence we have on one another varies
according to our structural position
Different positions have more/fewer opportunities to
directly influence others, spread information, or act
as gatekeepers
■ The same is true in exchange contexts
Certain positions are more powerful
Positions vary by positive/negative type, but
pendants always lose
Myopia, Projection, and Static
Reasoning
Business Decisions
SOC 138/ECO 148
Bounded Rationality and Strategic Behavior
Myopia
■ For sequential games, “look forward and
reason backward” can involve a lot of steps
■ Memory is a finite resource – we just can’t look
more than a few steps ahead
de Groot’s chess players: even grand masters only
look a few moves ahead (and very narrowly, at that)
■ Given this, players may sometimes treat finite
games as infinite
Can lead to very different behavior from perfect
rationality
Example: Iterated Prisoner’s Dilemma
■ In the iterated PD, individual PD games are played
each round
Payoffs (row player I, column player II):
              II: Cooperate    II: Defect
I: Cooperate      3, 3           -1, 5
I: Defect         5, -1           0, 0
Infinite game: cooperate (for reasonable discount rates)
Finite game: defect
■ Myopic play
Can’t see the end of the game, so treat as infinite
Cooperate until end is visible – then suddenly defect!
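The myopic strategy can be simulated directly; the `horizon` parameter (how near the end must be before a player "sees" it) is an illustrative assumption, and the payoffs are those of the slide's matrix.

```python
# Payoffs: T=5 (temptation), R=3 (mutual cooperation),
# P=0 (mutual defection), S=-1 (sucker's payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (-1, 5),
          ('D', 'C'): (5, -1), ('D', 'D'): (0, 0)}

def myopic_play(rounds, horizon):
    """Two myopic players: each treats the game as infinite
    (cooperates) until the end comes within `horizon` rounds,
    then defects.  Returns the pair of total scores."""
    score = [0, 0]
    for t in range(rounds):
        move = 'C' if rounds - t > horizon else 'D'
        p1, p2 = PAYOFF[(move, move)]
        score[0] += p1
        score[1] += p2
    return score

print(myopic_play(rounds=10, horizon=2))  # cooperate 8 rounds, defect 2
```

Note how this differs from the backward-induction prediction (defect from round one): myopia sustains cooperation until the end becomes visible.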
Myopia and Boundedly Rational Bubbles
■ To foreshadow next lecture’s topic a bit,
myopia can exacerbate speculative bubbles
Rational expectations: we adjust today for what (we
think) tomorrow will bring
If we are myopic, we will tend to assume that
current trends will continue forever (see also law of
small numbers)
➔ “20% growth per year, forever!”
➔ “It’s the New Economy!”
➔ "Housing prices never go down!"
■ When the end comes within sight, strategies
quickly shift – and the bubble bursts
Symmetric Reasoning
■ Another simplification: assume that other
players reason as you do
Greatly simplifies the problem of predicting others’
responses
Still assume other players are otherwise rational
■ Leads to strategies which are best responses
to themselves
Symmetric equilibria in pure and mixed strategies
Not always possible, but can be stable where “self-
fulfilling prophecies” abound
■ Example: yet more bubbles
Symmetry and Public Goods
■ A model for public goods: “N-M Chicken”
N players
M must contribute for the good to succeed
Prefer not to contribute, ceteris paribus, but will do
so if needed for the good to be provided
Chicken game – each person wants someone else
to give in and contribute
■ Many Nash equilibria, but only one symmetric
Players mix over contributing and not contributing
Equilibrium is deficient, but others cannot be
reached due to symmetry constraint
Static Reasoning
■ A simple heuristic: treat other players as
though they are unaffected by your own
behavior
We call this heuristic “static” in that it holds other
players’ behaviors constant
■ Simplifies the problem of optimal play…
Reduces game to a classical decision problem
■ …but produces its own problems
Relies on the assumption that other players are not
strategic actors – can lead to very poor play
Return to the “Beauty Pageant” Game
■ Remember this one?
Rules
➔ N players
➔ Each chooses a number in [0,100]
➔ Player whose choice is closest to half the mean choice wins
Subgame perfect Nash equilibrium: everyone chooses 0
Real play: start at 25, converge to 0
■ Players tend to treat others as static, respond
accordingly
■ Similar to signaling problems of art, fashion
elite, and (oddly enough) CD volume levels
It’s oh so hard to stay chic, n’est-ce pas?
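The observed convergence can be reproduced by iterated static best response: if everyone best-responds to last round's mean as though it were fixed, guesses halve each round. The starting value of 25 matches the real-play figure on the slide; the round count is arbitrary.

```python
def beauty_pageant(initial_guess, rounds):
    """Players who treat others as static: each round, best-respond
    to the previous round's mean by guessing half of it.  With
    everyone reasoning this way, guesses converge to the Nash
    equilibrium of 0."""
    guess = initial_guess
    history = [guess]
    for _ in range(rounds):
        guess = guess / 2  # best response to the (static) last mean
        history.append(guess)
    return history

print(beauty_pageant(25.0, 5))  # 25 -> 12.5 -> ... heading to 0
```

Each player's "static" model of the others is falsified every round, which is exactly what drives the gradual (rather than immediate) convergence to equilibrium.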
Projection
■ A special case of static reasoning (and of
symmetry)
First, introspect about your own behavior
Next, assume everyone else behaves as you think
you would
Given this, choose accordingly
(Thus, we “project” our own perceived tendencies
onto others, whether or not this makes sense)
➔ May be more likely where choice has strong affective
consequences
■ As Dawes points out, not always irrational…
We might be decent indicators of the population
■ …but generally so, when employed statically
Also, prediction should be regressive, sensitive to
biases, etc.
Exit Options and Dilemma Games
■ Research by Orbell and Dawes: providing exit
options in PD games increases the cooperation
rate
■ The reason? Projection
“Cooperators” assume others will cooperate,
anticipate getting mutual cooperation
“Defectors” assume others will defect, anticipate
getting mutual defection
Where exit is better than mutual defection,
“defectors” tend to leave and “cooperators” tend to
stay
Clearly irrational (extend the reasoning one more
step to see why), if salutary
■ “There is no [expectation of] honor among
thieves”
The Very Possibility of Irrationality
■ First order and second order effects of
irrationality
First order: the effects of the strategies themselves
Second order: the effects of the knowledge that
these strategies are in play
■ Even the very possibility of irrationality can
change the strategic situation
Sometimes, it is not necessary that anyone actually
be irrational, so long as rationality of all players
cannot be guaranteed
The Barnum Hypothesis
■ A broad family of cases: you know that you are
rational, but suspect that some others are
exploitable
“There’s a sucker born every minute” –Barnum
■ In some cases, the potential for exploitable
play may lead rational actors to play
exploitably
Rational players may think they can “trap” irrational
players (Prisoner’s Dilemma)
Putative irrational players may provide a buffer for
rational players (Hot Potato game)
■ Example: fiat currency and hot potato goods
The Uses of Madness
■ In some cases, the possibility of irrationality can be
used as a strategic device
■ The so-called “Nixon strategy”
Convince opponents that you have poor impulse control,
might strike out violently unless appeased (even if this
would be suicidal)
Can render otherwise incredible threats credible, force
opponents to back down
(“Feign madness, but keep your balance” – 36 Stratagems)
■ Of course, this can also backfire, if your opponent
has the same idea (or thinks you are too dangerous
to be left alone…)
It’s also hard to make credible promises once this strategy
has been invoked – who will trust a madman?
Summary
■ Many heuristics may be used to cope with the
problem of strategic action
Myopia – don’t look too far ahead
Symmetry – everyone thinks like me
Static reasoning – nothing I do matters
Projection – everyone acts like me
■ Not only these heuristics, but their possibility
changes the strategic landscape
If you can’t be sure your opponent is rational, you
must play very differently
■ Next time: herd behavior, bubbles, and
crashes!
Herd Behavior, Bubbles, and
Crashes
Business Decisions
SOC 138/ECO 148
Joining the Herd
■ Beliefs, behaviors in the real world are highly
correlated by proximity, mutual observability,
and interaction – why?
■ One important factor: imitation
Some imitative processes are largely cognitive in
nature (e.g., raw associative learning), but others
stem from strategic factors
■ “Herd behavior”: imitation or mimicry
stemming from informational or strategic
processes
Information Cascades
■ A basic model of herd behavior, first put forward by
Banerjee (1992) and Bikhchandani et al. (1992)
■ Ingredients:
Multiple decision makers
One best choice among many possibilities
Each decision maker gets a private signal
Decision makers forced to act sequentially
Each decision maker can see the choices of those who have
gone before
■ Dynamic: first actors follow their signals, but later
actors throw away their own information to follow the
crowd – thus, the group as a whole underutilizes its
information
Example: Picking a Winner
■ Imagine I investors who must choose one from a set of
P portfolios
Only one of the P is a high performer
Each investor receives a signal as to the best option, which is
true with probability p
➔ Signals randomly allocated, and 1 > p >> 1/P
Investors must move sequentially
■ Dynamic: investors follow own signals until two
choose the same option, after which everyone else
follows this choice
“Crowding” on early favorites draws later investors
On average, however, more people will lose out under this
model than if each was forced to act separately
Herding is individually rational, but collectively inefficient
4
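The cascade dynamic above is easy to simulate. A minimal sketch, assuming the slide's setup (sequential investors, private signals of accuracy p >> 1/P, herding once two investors agree); the specific counts and seed are illustrative:

```python
import random

def cascade(I=30, P=5, p=0.6, seed=3):
    """Sequential choice with private signals.  Portfolio 0 is
    (secretly) the high performer; each investor's private signal
    names it with probability p.  Investors follow their own signal
    until two have chosen the same option, after which everyone
    herds on that option."""
    rng = random.Random(seed)
    def signal():
        return 0 if rng.random() < p else rng.randrange(1, P)
    choices, seen, herd_on = [], set(), None
    for _ in range(I):
        if herd_on is None:
            c = signal()           # early investors follow their signal
            if c in seen:          # two investors now agree...
                herd_on = c        # ...so a cascade starts
            seen.add(c)
        else:
            c = herd_on            # private signal is thrown away
        choices.append(c)
    return choices

choices = cascade()
```

Because the herd locks onto whichever option happens to repeat first, the whole crowd sometimes piles onto a loser that drew two lucky early signals – the sense in which the group underutilizes its information.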
Informational Influences
■ We have already mentioned the power of social
influence
Some influence effects are psychological, but
others are due to availability of information
(Deutsch and Gerard, 1955)
■ Herding due to informational feedback loops
(Lux, 1996; Butts, 1998)
Where actors are unaware of the extent of message
passing, they can become convinced by reflections
of their own beliefs
Particularly strong effects in dense social networks
Belief feedback mechanisms can lead to consistent
over/undervaluation of commodities relative to
fundamentals
5
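The feedback loop can be made concrete with a small Bayes-update sketch (the 50% prior and the likelihood ratio of 1.5 are invented numbers): if A's tip circulates through B and C and comes back, and A mistakes the echoes for independent confirmation, the same evidence gets counted three times.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p0 = posterior(0.5, 1.5)   # A's belief after the original evidence: 0.60
p1 = posterior(p0, 1.5)    # ...after hearing the echo from B: ~0.69
p2 = posterior(p1, 1.5)    # ...after hearing the echo from C: ~0.77
```

No new information has entered the system, yet A's confidence climbs with every reflection of A's own belief.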
Feedback in Action
(Diagram: a rumor circulating among actors A, B, and C.
Panel 1: “IBM’s lookin’ good this quarter!”
Panel 2: “Hey, I hear IBM looks good right now.”
Panel 3: “Hmm, that confirms it: must buy IBM!”)
6
Schelling Games
■ Thomas Schelling has proposed a general
family of cascade-like mechanisms which lead
to herd behavior
■ Basic idea:
Individuals have attributes (demographics,
performance, etc.)
Individual incentives depend upon their attribute
and the population attributes (e.g., I want to be
better than average)
Individuals can join/leave groups freely
Dynamic: individuals migrate based on population,
but in doing so they change the population
distribution, leading to yet more migration (and
more change, etc.)
■ Can lead to “tipping point” phenomena
7
Example 1: Where Have All the Students Gone?
8
Example 2: Adoption of New Technologies
(Figure: adoption map plotting R(t+1) against R(t), both axes running from 0 to 1)
9
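The R(t+1)-versus-R(t) map in the figure can be iterated directly. A sketch using an illustrative S-shaped map (not necessarily the one plotted) shows the tipping behavior from the Schelling-games slide: the map has stable fixed points at 0 and 1 and an unstable one at 0.5, so populations starting on either side of the threshold migrate to opposite extremes.

```python
def adopt_next(R):
    """An illustrative S-shaped adoption map with fixed points at
    R = 0, 0.5, and 1; only the outer two are stable."""
    return R**2 / (R**2 + (1 - R)**2)

def iterate(R0, steps=30):
    """Iterate the map: this period's adopter share sets next period's."""
    R = R0
    for _ in range(steps):
        R = adopt_next(R)
    return R

low, high = iterate(0.49), iterate(0.51)   # just below vs just above the tipping point
```

Two populations that start 2 percentage points apart end up at opposite corners – the "tipping point" phenomenon.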
Information Concealment
■ Another herding process: information
concealment
■ Basic idea:
Actors vary in, rewarded for competency
Competency revealed through decision making
Actors choose sequentially
Dynamics: later actors imitate earlier ones, so as to
conceal their own information
■ “It’s safer to be wrong with the crowd.”
Relation to herding for protection among animals
10
Example: A Tale of Two Portfolio Managers
11
Speculative Bubbles
■ Speculative bubbles occur when market prices
systematically exceed fundamental valuations
These bubbles “crash” when prices eventually
correct (or overcorrect, in some cases)
■ A puzzle: why would prices be driven
continuously upward, despite the inefficiency?
Increasingly, economists have looked to
psychological and social structural mechanisms for
these deviations from standard theory
Several (e.g., Lux, Orlean) have pointed to herd
behavior as a potential culprit
12
Bubbles and Herd Behavior
■ Herd behavior can lead to speculative bubbles
in many ways:
“Piling on” of investors on commodities which
happened to do well initially
Imitation of early/prominent movers in the market so
as to look good (“window dressing”)
Iterated feedback of positive or negative signals
regarding market performance
■ Herding can also exacerbate crashes
Once the herd changes course, things fall apart
quickly
Can be spurious (e.g., bank runs based on rumors)
13
Example: Florida Land Boom
■ Tremendous speculative bubble in 1920s
National highway system, economic prosperity
made Florida seem a viable destination
Prices for undeveloped swampland spiraled out of
control – small lots went for hundreds of thousands
Finally ended with the hurricane of ‘26
■ Numerous herding elements:
Information cascades from early successes
“Hothouse” environment of continual feedback
■ Compare to real estate and dot-com bubbles in
our own time, or the tulip manias of 17th
century Europe
14
Summary
■ Strategic pressures towards mimicry can generate
“herd behavior”
Information cascades: exploit first movers’ information
Feedback loops: the opinion you hear may be your own
Information concealment: hide your own belief by
following the crowd
Schelling games: local incentives lead to global
convergence
■ Herd behavior can create or exacerbate
bubble/crash cycles
Feedback leads to runaway overvaluation
■ Next time: organizational isomorphism and
management fads
15
Organizational Isomorphism
and Management Fads
Business Decisions
SOC 138/ECO 148
1
From Individual to Firm
■ There are various mechanisms by which
individuals come to mimic each other
(sometimes with unfortunate results)
Social influence, information cascades, etc.
■ Similar phenomena occur at the firm level
Organizational decision making affected by
structural factors, tendencies towards mimicry
■ Some mechanisms are similar, others less so
Today, we explore some of these – and some
implications for management practice
2
Neo-Institutionalism
■ Branch of theory on the border of sociology
and economics
■ Focused on how organizations actually behave
Somewhat like judgment and decision making writ
large
Emphasis on structural factors shaping
organizational practice, internal mechanisms of
pattern maintenance and change, evolution of
institutions
■ Important if you want to understand the
environment in which you will be working
3
Organizational Isomorphism
■ A basic neo-institutionalist prediction:
organizations within the same field will become
more similar over time
Called “organizational isomorphism”
“Field” constitutes a network of interacting
organizations within a given sector of the economy
(Note that perfect isomorphism isn’t predicted –
witness niche formation – only a general tendency)
■ Compare to similar results regarding mimicry
in humans
■ Many mechanisms – we’ll look at a few
4
External Pressures
■ Organizations face external pressures from
many fronts
Strategic environment may favor certain forms and
practices (e.g., Williamson’s TCE)
Regulations from state entities may mandate forms
and/or practices (e.g., EEOC)
Some practices may have value as signals (e.g.,
Oldman’s croupier switch)
■ On average, these pressures will tend to
encourage firms in similar positions to become
more similar over time
These pressures are largely “horizontal” in effect
5
Interaction and Exchange
■ Interaction and exchange can also favor
isomorphism
Trading partners require specific, uniform
arrangements (e.g., standard contracts)
Personnel exchange, combined with individual
learning, tends to spread practices across firms
(e.g., dot-com VCs)
Management consultants, management press,
business schools act to diffuse practices (e.g.,
TQM)
■ Like interactions between individuals, these
factors tend to make interacting firms more
similar
Important, since this can encourage vertical
integration
6
“Best Practices” and “Benchmarking”
■ Mimicry by firms can be quite deliberate
■ Witness the notions of “benchmarking” and
“best practices”
Basic idea behind both: identify firms which seem
to be doing well, and copy their practices
In benchmarking, a wider comparison group is
sometimes used, with more tailored choices for
each aspect to be evaluated, but the idea is the
same
■ General effect of these mechanisms over time:
increased isomorphism
A secondary effect: diffusion of practices across
fields
7
Management Fads
■ A side effect of the diffusion of practices
across firms – management fads
Collections of practices which usually promise to
“change business as we know it”
Popular for a time, then they fade away
■ Management fads can be expensive
A large firm can spend millions tooling up for the
latest “big thing”
Actual benefits often questionable
➔
By the time the dust clears, a new fad is already in place!
8
The Life History of a Management Fad
■ Management fads follow a distinct life cycle
■ Four periods (following Strang and Macy):
1. “Incubation period” – few adopters, slow growth
2. “Take-off period” – moderate number of adopters,
massive growth
3. “Ascendancy period” – many adopters, slow
growth
4. “Decline period” – few adopters, rapid decline
■ Before and after fad, essentially no adopters
are present
“Ashes to ashes, dust to dust, obscurity to
obscurity”
9
Example – Quality Circles
■ General idea: multiparty review of production
quality
■ History
Diffused through Japanese industry by mid-1960s
Long “incubation period” in US
Massive expansion of popularity during 1980s
➔
1982 survey indicated 43% penetration rate among large
US manufacturing firms
➔
60 QC consultancies listed in 1983, with 469 FTEs
Virtually no adopters remained by the early 1990s
➔
Only 5 QC consultancies listed by 1994, with 60 FTEs
■ Similar profile to “Quality of work life,” “TQM,”
“job enrichment,” “excellence,” etc.
10
11
The Strang and Macy Model
■ Work by Strang and Macy (2001) provides
insight into the dynamics of management fads
■ Mechanisms drawn from standard
observations
Firms evaluate “success stories” of innovation use
➔
“Best practices” rationale
Promising innovations are adopted
➔
Evaluations tempered by inertia and skepticism
If the firm subsequently fails to meet its
performance expectations, it drops the innovation
➔
Win-stay, lose-shift model
■ Simulation of firm populations lets us ask
“what-if” questions about innovation diffusion
and fads
12
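A toy version of the adoption/abandonment dynamic can be simulated. This is a loose mean-field sketch in the spirit of the model, not Strang and Macy's actual simulation, and all parameters are invented: firms adopt in proportion to the current crop of success stories, drop a disappointing innovation (win-stay, lose-shift), and, once burned, do not come back.

```python
def fad_sim(firms=500.0, periods=80, adopt_rate=0.8, drop_p=0.15):
    """Deterministic toy fad dynamic.  S = susceptible firms,
    A = adopters, D = disillusioned (won't readopt).  Returns the
    adopter count per period."""
    S, A, D = firms - 5, 5.0, 0.0
    history = []
    for _ in range(periods):
        new_adopt = adopt_rate * A / firms * S   # emulation of success stories
        new_drop = drop_p * A                    # expectations unmet: drop it
        S, A, D = S - new_adopt, A + new_adopt - new_drop, D + new_drop
        history.append(A)
    return history

history = fad_sim()
```

Plotting the returned counts reproduces the four-phase life cycle from the earlier slide: slow incubation, take-off, ascendancy, and decline back to near zero – even though the innovation here is worthless by construction.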
Three Basic Results
■ Worthless innovations can spread in a
population of performance-driven firms
Superstitious learning – firms eager to get the “next
big thing” may chase after noise
■ Adaptive emulation leads to fads
Boundedly rational firms soon learn that they’ve
adopted the wrong innovation – thus, useless
innovations are dropped after an initial rise in
popularity
■ Fads are strongest when innovations have
some value
It’s harder to see through the hype when a kernel of
truth is present
13
Some General Implications
■ The “new greatest thing” probably isn’t
Many, many fads have come and gone over the
years – what are the odds that this is “the one”?
■ When evaluating new practices, use statistical
controls to reduce selection effects
Most CEOs brush their teeth in the morning, but
that’s not predictive of success
As always, Bayes’ Theorem is your guide to truth
■ You’ll probably wind up bowing to fashion
anyway
Isomorphic pressures are strong, hard to resist
Start with inexpensive pilot programs, resist the
urge to go all the way – if your fundamentals are
strong, you’ll probably be around tomorrow
14
Summary
■ Like individuals, firms tend to imitate each
other
Horizontal factors – strategic fit, regulation, etc
Vertical factors – exchange relationships, transfer of
personnel, etc.
■ Imitation can produce management fads
Even useless (and expensive) ideas can achieve
high rates of adoption
Follow from very basic mechanisms, and are
probably here to stay
■ Next time: pyramids and Ponzi schemes
15
Pyramids and Ponzi Schemes
Business Decisions
SOC 138/ECO 148
1
The Price of Human Folly, Redux
■ From the cold Antarctic wastes to the steamy
swamps of Florida, we have seen that poor
decision making can exact a heavy price
■ Today, we turn to two recurrent scams which
siphon millions of dollars each year
Both exploit herd behavior and cognitive biases to
profit from the unwary
Understanding how – and why – these scams work
will both improve your grasp of decision making in
general, and (perhaps) help keep you from being
victimized
2
Charles Ponzi and Friends
■ In late 1919, one Charles Ponzi began offering
Bostonians an investment opportunity
Notes issued, redeemable at 90 days with 50% interest
Claimed to be based on “international securities” (postal
reply coupons)
By July of 1920, Ponzi was estimated to be taking in over
$250,000 per day in Boston alone!
Ponzi hailed as “financial genius,” “the people’s financier”
■ Despite obvious problems with his story, Ponzi
wasn’t discovered and shut down until August of
1920
Thousands of people involved, millions of dollars lost, 6
banks failed, etc.
Massive payoffs based on paying out redemptions with
revenue from new issues – no postal reply coupons involved
Scheme wasn’t even new – William “520%” Miller convicted
in 1903 for similar scheme
3
How the Scheme Works
■ At time t, n notes are issued at a cost of $m
each, which are redeemable for a payout of $rm
at time t+1
r is usually large, e.g., 1.1–1.5 for short time spans
■ The total payout of $rmn is obtained by selling
more notes before t+1 arrives, and using the
new revenue to pay the original investors
New sales needed are given by rmn/m = rn notes
Thus, sales must grow exponentially (by a factor of r each
period) in order for the scheme to stay viable – this quickly
gets out of hand
4
Example of a Ponzi Scheme
Week | Notes Issued (@$100) | Notes Redeemed (@$150) | Revenues ($) | Payouts ($) | Debt Going Forward ($)
1 | 50 | – | 5,000 | – | 7,500
2 | 75 | 50 | 7,500 | 7,500 | 11,250
3 | 113 | 75 | 11,300 | 11,250 | 16,950
4 | 170 | 113 | 17,000 | 16,950 | 25,500
8 | 863 | 575 | 86,300 | 86,250 | 129,450
12 | 4,373 | 2,915 | 437,300 | 437,250 | 655,950
16 | 22,140 | 14,760 | 2,214,000 | 2,214,000 | 3,321,000
20 | 112,085 | 74,723 | 11,208,500 | 11,208,450 | 16,812,750
24 | 567,432 | 378,288 | 56,743,200 | 56,743,200 | 85,114,800
52 | 48,358,104,057 | 32,238,736,038 | 4,835,810,405,700 | 4,835,810,405,700 | 7,253,715,608,550
5
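The table can be reproduced with a few lines of arithmetic, using the slide's parameters: $100 notes, 50% interest, and issues growing by the required factor of r = 1.5 per week (rounded up to whole notes).

```python
import math

def ponzi(weeks=12, n0=50, m=100, r=1.5):
    """Each week's redemptions (prior issues at $m*r per note) must
    be covered by new sales, so issues must grow by a factor of r
    per week.  Returns (week, issued, redeemed, revenue, payout,
    debt_forward) tuples."""
    rows, issued, prev = [], n0, 0
    for week in range(1, weeks + 1):
        rows.append((week, issued, prev,
                     issued * m,        # revenue from new sales
                     prev * m * r,      # payouts owed to last week's buyers
                     issued * m * r))   # debt carried into next week
        prev, issued = issued, math.ceil(issued * r)
    return rows
```

By week 52 the required issue count exceeds 48 billion notes – far more notes than there are people to buy them, which is why every such scheme must collapse.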
Why Is It So Persuasive?
■ Ponzi and Miller’s operations exploited several
herd behavior mechanisms
Information cascades: long lines outside building,
placement of receiving desks behind redemption
desks (for Miller)
Information feedback loops: planted stories (for
Miller), “hothouse” rumor mill
■ Cognitively, the schemes exploited basic
weaknesses
Law of small numbers: short run success
extrapolated to long-term profitability
Difficulty of nonlinear extrapolation: hard to see
how unsustainable the profit growth would have to
be
6
Aside: Exponential Growth and COVID-19
■ Early phases of an
epidemic display
exponential growth, too
■ Implication: easy for an
outbreak to "sneak up"
on decision makers
Transition from few/almost
no cases to too many can be
very sharp
Risk of health systems being
overwhelmed
8
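The "sneaking up" is just the arithmetic of doubling. A sketch (the 3-day doubling time is an illustrative figure, not an epidemiological estimate):

```python
import math

def days_to_reach(target, current, doubling_days=3.0):
    """Days for a quantity doubling every doubling_days to grow
    from current to target."""
    return doubling_days * math.log2(target / current)

# At a 3-day doubling time, going from 100 cases to 100,000 takes
# about 30 days: growth that looked negligible for weeks can
# overwhelm capacity almost overnight.
```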
The Power of the Pyramid
■ Like Ponzi schemes, but based on a static
system of redistribution
“Tree-like” structure with most recent recruits each
bringing in new members
New recruits must pay earlier recruits, often several
levels up the tree
■ Unlike Ponzi schemes, structure is usually
public
In some cases, an obfuscatory element may be added
(e.g., a side business, in the case of MLM)
■ Began with chain letters in mid-1930s
With tougher mail fraud enforcement, pyramids moved to
face-to-face gatherings and (recently) the net
Large surge in 1980s, but continue to be popular today
(MAKE MONEY FAST, anyone?)
9
How it Works
■ A simple example scheme:
Each member at level i recruits n new members, who each
pay $m to their “parent” j levels up the tree
If it works, each member gets $mn^j…
…but the system needs n^i recruits at level i to keep
growing
■ Exponential growth dooms the system
Only the earliest members have any real chance of getting a
payoff
(Diagram: binary tree with level sizes 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8; $ flows toward the top)
10
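The recruiting requirement can be checked directly (a sketch; the 330 million population is an illustrative round number for the US):

```python
def max_levels(n=2, population=330_000_000):
    """Number of complete levels an n-recruit pyramid can fill
    before exhausting the population: level i requires n**i new
    members."""
    total, level = 0, 0
    while total + n**level <= population:
        total += n**level
        level += 1
    return level

# max_levels() -> 28: even recruiting only 2 people each, the scheme
# runs out of Americans after 28 levels, and the bottom level (half
# of everyone involved) can never be paid.
```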
Pyramid Persuasion
■ Strongest common factor: difficulties in
extrapolation
Usual problems with exponential growth
Difficulty compounded by lack of understanding of
network constraints: your contacts have probably
been recruited already by someone else!
■ Many herding factors also exploited
Information feedback, information cascades both
common
In some cases, information concealment occurs
when mid-level members hide their recruiting
problems to avoid driving away new members –
ostentatious displays of wealth common for this
reason
11
Capitalizing on Sociability
■ Another key element of most pyramid schemes
is that they exploit sociability
■ Chain letters
Strong language of guilt and obligation
Urged to send to friends and family members
■ In-person schemes
Again, friends and family recruited
Shared rituals (e.g. “money-humming”), “mentoring” of
new recruits in MLM, prestige and status conferred on
successful (i.e., senior) members
Failure often blamed on early exit – seen as personal
weakness, lack of commitment or tenacity
12
Summary
■ Pyramids and Ponzi schemes both demonstrate
how individual decision making can be
manipulated to tremendous (and ultimately
deleterious) effect
Both strategic and cognitive elements involved
Understanding how these schemes work can help
you avoid being victimized
■ Ultimately, these cases illustrate the basic
message of this class:
All of us are fallible decision makers
Understanding how we make decisions can help us
avoid errors and achieve our goals
■ Next time: the big wrap-up!
13
Rational Decisions and Real
Decisions
Business Decisions
SOC 138/ECO 148
1
Looking Back on Where We’ve Been
Part 1: Models of Rational
and Irrational Decision
Making Behavior
Part 2: Pathologies in
Individual Decision
Making
Part 3: Decision Making in
Social Contexts
2
What Makes a Decision Rational?
We began the course with a consideration of
what went into a rational decision
Decision structure
The axioms of choice
Probability theory and Bayes’ Theorem
Game theory
These tools can be used to identify errors in
others’ decisions, or to make your own
decisions more rational
3
Frames and Psychophysics
We talked about how we naturally tend to make
decisions, and some vulnerabilities of this
approach
Gain/loss frames
Loss aversion
Risk perception: dread, knowability, and scale
Not only do these models allow us to better
predict how people will respond to real-world
opportunities, but they also give us insight into
why people react as they do
Very important in a country concerned about COVID,
crime, climate change, and myriad other threats
4
Heuristics and Biases
We learned about the rules of thumb we use to
make “intuitive” judgments – and where they
go wrong
Representativeness
Satisficing
Win-stay, lose-shift
Static reasoning
These ideas have many marketing applications,
and can also be useful in your daily life
It’s always useful to know when your rule of thumb is
going to be more of a hindrance than a help
5
Fallacies of Judgment
We uncovered patterns of fallacious reasoning,
and identified some of the problems caused
thereby
Confirmatory search and pseudodiagnosticity
The gambler’s fallacy
Failure to understand the regression effect
The law of small numbers
These errors in reasoning can be costly, but
training will help you avoid them
Don’t become a Horace Secrist, or fall for a quack
“regression effect” therapy
6
Illusions of Control and Expertise
We saw many ways in which we may fall prey to
illusions about our own abilities
Spurious pattern recognition
Post-hoc probability assessment
Illusions of efficacy
Selection effects
Affects everything from the viability of technical
trading to the careers of professional psychics
Learning to be skeptical of others – and yourself –
may save you money and embarrassment down the
road
7
How Can We Make (More) Rational Decisions?
8
The Animal Inside of Us
We considered many of the fundamental human
attributes which affect our decision making
behavior
Association formation and learning
Memory
Expectancy and selective perception
Many uses in marketing, criminal justice, politics
If you need to shape opinion or to measure it, these
are issues with which you will have to grapple
9
How Do We Feel about Others?
We studied theories of affective reasoning and
attribution
Saliency, and attributions of causal importance
The Fundamental Attribution Error
Attributions of competence
Impression formation, EPA ratings, and ACT
Anyone who’s ever wanted (or needed) to
understand how to spin a headline needs this
stuff
Also, wear bright shirts, sit in front, and ask all the
questions
10
How Context Shapes Opportunity
We have discussed the impact of structural
factors on opportunity, and thereby decisions
Contacts and the job search
Centrality
Hubs, bridges, and pendants
Cliques and cults
Knowing which positions are influential/powerful
helps you make the most of your own chances
It also tells you where you need to be – or avoid being
11
Collective Decision Making Traps, and How to Avoid
Them
12
Management Fads and Isomorphism
We have seen how organizational decision
making can be shaped by factors which are
sometimes less than rational
Isomorphism pressure
“Best practices” and “benchmarking”
Management fads
Hopefully, you have become a bit more wary of
buying into the “next big thing”
You’ve also learned something about how fads can be
spread, for those who plan on writing the next
Theory Z
13
Scams and Cons
Finally, we have identified many scams and
confidence games over the quarter which prey
upon human limitations
Professional “psychics” and cold reading
The stock market selection scam
Pyramids and Ponzi schemes
Knowing how these games work may keep you
from being exploited
But if any of you wind up on the other side of these
scams, please don’t say you learned it here….
14
A Last Word
From a consultant whose book has
been a bestseller for over 2500
years:
“With many calculations, one can win.
With a few, one cannot. How small a
chance has he who makes none at
all?” – Sun Tzu
You now have the raw material for
better decision making – use it in
your own calculations, and reap the
rewards
15