
Copyright Presh Talwalkar

About The Author


Presh Talwalkar studied Economics and Mathematics at Stanford
University. His site Mind Your Decisions has blog posts and
original videos about math that have been viewed millions of
times.
Books By Presh Talwalkar
The Joy of Game Theory: An Introduction to Strategic
Thinking. Game Theory is the study of interactive decision-
making, situations where the choice of each person influences
the outcome for the group. This book is an innovative approach
to game theory that explains strategic games and shows how you
can make better decisions by changing the game.
Math Puzzles Volume 1: Classic Riddles And Brain Teasers
In Counting, Geometry, Probability, And Game Theory. This
book contains 70 interesting brain-teasers.
Math Puzzles Volume 2: More Riddles And Brain Teasers In
Counting, Geometry, Probability, And Game Theory. This is
a follow-up puzzle book with more delightful problems.
Math Puzzles Volume 3: Even More Riddles And Brain
Teasers In Geometry, Logic, Number Theory, And
Probability. This is the third in the series with 70 more
problems.
But I only got the soup! This fun book discusses the
mathematics of splitting the bill fairly.
40 Paradoxes in Logic, Probability, and Game Theory. Is it
ever logically correct to ask “May I disturb you?” How can a
football team be ranked 6th or worse in several polls, but end up
as 5th overall when the polls are averaged? These are a few of
the thought-provoking paradoxes covered in the book.
Multiply By Lines. It is possible to multiply large numbers
simply by drawing lines and counting intersections. Some people
call it “how the Japanese multiply” or “Chinese stick
multiplication.” This book is a reference guide for how to do the
method and why it works.
The Best Mental Math Tricks. Can you multiply 97 by 96 in
your head? Or can you figure out the day of the week when you
are given a date? This book is a collection of methods that will
help you solve math problems in your head and make you look
like a genius.
The Irrationality Illusion: How To Make Smart Decisions
And Overcome Bias. This handbook explains the many ways
we are biased about decision-making and offers techniques to
make smart decisions. The biases of behavioral economics are
like optical illusions: while we fall for them every time, we can
also learn to recognize the patterns and see through the tricks.
Fool me once, shame on you. Fool me twice…you won’t get
fooled again after reading this book.
Table of Contents
1. You Can’t Get Fooled Again
2. The Relative Trap: Be Absolute

Percent Savings Fallacy

Anchoring

Framing Effect

Unit Effect

Denomination Effect

Discounts Versus Bonus Quantities

Sounds Misleading

Endowment Effect

Loss Aversion

Zero Risk Bias


3. Losing Track: Stay On Target

Untracked Spending

Too Many Goals

Commitment Device

Flat Rate Bias

Decoy Effect

Pre-Order Bias

Present Bias

Opportunity Costs

The Value Of Your Time


Salami Tactics

Change Blindness
4. Uncertain About Uncertainty: Calculate The Probability

Conjunction Fallacy

Selection Bias

Survivorship Bias

Confirmation Bias

Coincidences

False Positive Bias

Disagreeing Experts

Base Rate Fallacy

Excessive Risk Aversion

Winner’s Curse
5. Consistency And Social Norms: Question The Rules

The Wrong Average

Priming Effect

Expensive Expectation

Sunk Cost Fallacy

Money Comparisons

Untested Preferences

Speeding

Volunteer’s Dilemma
Conclusion
1. You Can’t Get Fooled Again
Traditional economics characterizes people as rational agents,
meaning they have logical preferences, they account for risk
using probability, and they maximize their utility. Behavioral
economics points out that people are not well described by the
rational agent model: we often have inconsistent preferences,
overreact to risk, and act against our own interests. The
systematic deviations from the rational
agent model are known as cognitive biases, decision traps,
decision fallacies, or irrational decisions.
People often argue which side is “right.” In this book, I want to
bridge the gap by pointing out both perspectives are useful. It is
true cognitive biases can lead to bad decisions. However, we can
learn to do better. We can recognize the biases and then train
ourselves to make better decisions.
To see what I mean, consider the following graphic.
[Figure: the café wall illusion]
How would you describe the horizontal lines? At first glance you
would say the horizontal lines are curved and skewed. But look
closely and measure the horizontal lines against a straight edge.
In reality, the horizontal lines are all straight and parallel to each
other. This is the café wall illusion.
In a way, every time you see this illusion, you will get tricked.
The brain misperceives this arrangement of lines and shapes.
But in another way, you will never be tricked again. Your brain
can recognize this pattern of lines and rectangles as an optical
illusion. If you are shown this illusion again, you would be on
guard for it. You would correct your bias and realize the
horizontal lines are straight and parallel.
Cognitive biases are like optical illusions. They exist and we
tend to get tricked by them. By the same token, we can learn
never to be tricked again. Just as we can recognize optical
illusions and differentiate perception from reality, we can be alert
for cognitive biases and distinguish between a decision trap and
a rational choice.
Daniel Kahneman is a psychologist who was awarded the
Nobel Prize in economics for his research into behavioral
economics. Kahneman characterizes our brain as having two
modes of thinking [1]. System 1 is “fast” thinking that relates to
instincts and gut reactions. System 2 is “slow” thinking that
relates to deliberate and logical plans. When we see an optical
illusion, we get fooled by the automatic and instinctual response
of System 1 thinking. When we study the illusion more closely
using System 2 thinking and realize the pattern is misleading, we
can overcome the bias.
There is an old saying: “Fool me once, shame on you. Fool me
twice, shame on me.” While visiting Tennessee in 2002,
President George W. Bush had a momentary memory lapse. He
ended up uttering: “There’s an old saying in Tennessee — I
know it’s in Texas, probably in Tennessee — that says, ‘Fool me
once, shame on…shame on you. Fool me — you can’t get fooled
again.’”
Everyone made fun of the gaffe; they called it another
“Bushism.” But perhaps W. Bush was making sense after all. If
you get fooled once, then shame on the other person for
exploiting the cognitive bias. Then, at that point, you can
recognize the game. If someone tries to fool you again, they
simply can’t! You have identified the cognitive bias and learned
the rational response.
Behavioral biases are similar to tourist traps. While sightseeing in a
new city, you may end up in crowded, overpriced shops selling
junk. But you only need to fall for a tourist trap once. The next
time you will avoid the area. Nowadays you can avoid the tourist
traps entirely by planning with Yelp, TripAdvisor, or asking
friends. Similarly, you are likely to fall for a behavioral bias the
first time you encounter a new decision. But nowadays you can
learn about the biases in advance (by, say, reading this book) so
you can avoid making a bad decision in the first place.
This book is a practical guide to overcoming common behavioral
biases. In each section, I will describe common mistakes and
why they happen. Then I will suggest techniques to help you
make the smart decision. Over time you will recognize the
cognitive biases as naturally as you can spot optical illusions.
You won’t get fooled again.
A thought experiment
Before I get into the biases, I want to give an example to
illustrate the difference between rational agent theory and
behavioral economics.
Imagine you’re in a psychology experiment about decision-
making. You can select one of the choices.
(A) $100 prize now
(B) $200 prize if you wait 15 minutes.
What do you do? I think almost everyone would select (B) $200
in 15 minutes. And universally we would agree that is the smart
decision.
Now imagine the experiment is framed differently. Imagine you
are a 5-year-old and a marshmallow is put in front of you. Which
choice do you select?
(A) Eat a single marshmallow now
(B) Earn a 2nd marshmallow by waiting for 15 minutes.
Again, the answer seems obvious. Logically choice (B) is
better, as you can double the reward by waiting for 15
minutes.
But here’s the thing: that’s not what most children actually do.
The Stanford marshmallow experiment found that only about 30
percent of the children waited for 15 minutes [2]. A follow-up
study more than 10 years later suggested good things come to
those who can wait. The children who delayed gratification were
found to have higher educational achievement and a lower body
mass index.
Does that mean (B) is the correct answer? That choice (B) is the
rational thing to do?
This is where the story gets slightly complicated by the history
of economics.
How rational became a bad word
Let’s return to the first experiment. You can either (A) claim a
$100 prize now, or (B) claim a $200 prize if you wait 15
minutes.
Pretty much everyone would say (B) is the better choice. You
double your money just by waiting for a few minutes.
But think about variations of the experiment. Imagine you had to
wait 1 year to claim the $200 prize. Is it still worth it? What if
you had to wait 2 years, or 5 years, or 10 years?
When you think about it, the experiment is really about the
tradeoff between a current reward and a future reward. What is
the correct way to evaluate time versus money?
Traditional economics developed the rational agent model to
answer this and other questions. The rational agent model proved
useful. By quantifying costs, benefits, time, and the rules for
evaluating tradeoffs, it was possible to mathematically solve for
the choice that gave the highest reward. In other words, it was
possible to solve for the correct choice under the assumption of
rationality.
Furthermore, there was hope economics could also predict how
markets would behave by considering the interaction of many
rational agents.
This was probably taking the theory too far. Humans do not
always act according to the rational agent model. In fact, there
are often good reasons we act differently.
Let’s return to the marshmallow experiment. You can either (A)
eat the marshmallow now, or (B) earn a 2nd marshmallow by
waiting for 15 minutes.
An appropriately specified rational agent model would conclude
(B) is the correct choice. You get double the marshmallows and
you only have to wait for a few minutes.
Now here’s where it gets interesting: most kids don’t make the
rational decision! The Stanford marshmallow experiment found
that only about 30 percent were able to delay gratification.
Why do 70 percent make choice (A) instead? Is there a reason
we are “predictably irrational”?
The field of behavioral economics considers the psychological
reasons and cognitive factors for decision-making. Many times
the decisions we make are not well described by the rational
agent model.
A behavioral economist might point out children are only
boundedly rational: some may not yet realize that waiting 15
minutes for double the reward is a good tradeoff, or others might
not have the willpower to execute the plan. Or the behavioral
economist might say the children are satisficing: they do not
actually want 2 marshmallows to maximize their payoff; they
would be happy eating 1 marshmallow, so that is their bliss
point. Isn’t it a good thing if kids eat less sugar anyway?
The behavioral economist might also suggest the experiment
framing was important: the presentation of the marshmallow
choice may have primed them to seek immediate gratification.
Perhaps the children would act differently if the choice was
about money or presented in another fashion. There may be
many other explanations for the behavior as well.
The behavioral economics observations may be accurate for how
we make decisions. And they might also be useful for predicting
what people do. The rational agent model predicts waiting 15
minutes, which only 30 percent did; a model that suggests
children take 1 marshmallow would have a 70 percent success
rate.
You can see why behavioral economics might be favored over
the rational agent theory. I suspect behavioral economics also has
popular appeal because people can relate to its explanations. Plus
behavioral economics can be comforting: isn’t it nice to hear
other people make bad mistakes too? It’s like saying it’s okay to
take the single marshmallow: it’s not really a good idea, but the
70 percent want to hear it.
Behavioral economics is the subject of best-selling books and its
ideas are also shaping policy as the UK and US governments
want to “nudge” people to select healthy foods, reduce energy
consumption, and save for retirement.
And in the marketing of these books and ideas, people have
somehow gotten to the point they think rational is a bad word.
Be rational
You know better. Most of America would pick the single
marshmallow. But the popular decision is not always the rational
one.
The 30 percent who could wait made the rational decision. They
were not necessarily smarter or genetically superior individuals.
It turns out many of them were tempted to take the 1
marshmallow too. But they were able to make the rational choice
because they had a strategy. They either understood the tradeoff
logically, or they employed strategies like distracting themselves
so waiting was easier. They recognized the bias and overcame it.
That’s the idea of this book. While most of us do not make the
rational decision automatically, we can identify situations where
we make mistakes and develop strategies to make the smart
decision.
Let’s consider how most of us are with physical health. By
nature most of us opt for junk foods and do not exercise. You
could say we are “irrational” about health. But a few of us are
exceptional physically. Professional athletes know there are ways
to become fit. They follow strict exercise routines and diet
regimens that help them build muscle, gain speed, and shed body
fat.
Similarly, by nature we tend to make some decisions that are not
rational. But we don’t have to settle for that. If we understand the
possible ways we make bad decisions, we can figure out ways to
counteract them. Over time we can practice to make smarter
decisions and that becomes a habit.
To use another analogy, the brain is like a software operating
system. When you buy a computer or a phone, the device
contains software to help you accomplish various goals. But the
software also contains security bugs that hackers can learn and
exploit. Periodically the software has to be updated with security
patches. The human brain similarly is “pre-loaded” with tools to
help you accomplish various goals. But the brain misperceives
some stimuli, which others can learn to exploit. The brain needs
to be updated with new information to avoid falling for cognitive
biases.
In this book I want to help you make smart decisions. I will go
over many cognitive biases and then give tips on how to
overcome them and see through the irrationality illusion.
Notes
[1] This is from Daniel Kahneman’s 2011 bestselling book
Thinking, Fast and Slow.
[2] Walter Mischel’s book The Marshmallow Test
explains the background and results. The original experiment
had a reward of a pretzel, cookie, or marshmallow, but the name
“marshmallow experiment” has stuck. Mischel, Walter, Ebbe B.
Ebbesen, and Antonette Raskoff Zeiss. “Cognitive and
Attentional Mechanisms in Delay of Gratification.” Journal of
Personality and Social Psychology 21.2 (1972): 204.
2. The Relative Trap: Be
Absolute
Let’s start out with a quiz. Which of the lines is longer?
[Figure: two lines with arrow-like ends]
How about this: which circle is larger, the middle circle in the
left arrangement or the middle circle in the right arrangement?
[Figure: two arrangements of circles]
At first glance, line A appears to be longer. But you can measure
the lines: they are actually the same length. This is the Müller-
Lyer illusion.
Similarly, at first glance, the circle in the right arrangement
appears to be larger. Again you can measure it: the two circles
are actually the same size. This is the Ebbinghaus illusion.
In each illusion, we judge an object’s size relative to its
surroundings and get tricked by extraneous details.
You can see how a business might employ this optical illusion.
They can make a product appear larger in size, and charge more
for it, by manipulating its surroundings.
There are a variety of biases in behavioral economics that relate
to how we misperceive size based on relative surroundings.
In each case the solution is to think about the absolute size. Just
as we cover the surroundings in the optical illusions to see the
true measurement, we can mentally block out extraneous factors
to make a rational decision based on the relevant details.
Percent Savings Fallacy
The illusion: We judge the size of savings relative to our
purchase. We are more likely to expend effort to save $5 on a
$15 purchase than to save $5 on a $125 purchase.
The rational response: Ignore the purchase price and focus on
the savings as an absolute amount. Think about what the savings
could buy in absolute terms, like how $5 could buy lunch the
next day.
Examples
I was at the hardware store to buy a few things. I remembered I
needed some fluorescent bulbs, which I found were $10 each. I
did a quick search on my phone and saw another hardware store
5 minutes away had similar generic bulbs for $9. Was it worth it to
go to the other store? I decided the extra trip was not worth
saving $1.
The next day I was shopping for groceries. I needed eggs and
was disappointed the price had gone up to $3 a dozen. I knew
from ad papers that a store 5 minutes away had eggs on sale for
$2. I instinctively decided saving $1 was worth it and made the
trip to the other store.
In each case the decision felt correct. But my actions were
inconsistent. In both cases I had the chance to save $1 by going
to a nearby store. Why was I more willing to save money on the
eggs than on the light bulbs?
My mistake was viewing savings as a percentage of my
purchase. The $1 on the eggs was a 33% savings, which felt like
a lot, whereas the $1 on the light bulbs was a 10% savings,
which felt small. The purchase price acted like the surrounding
lines on the Müller-Lyer illusion or the surrounding circles of the
Ebbinghaus illusion: it made me forget that the $1 of savings
was the same absolute size in both cases.
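If you like to verify with a few lines of code, here is a minimal sketch (in Python; the prices are the ones from my anecdote) showing how the same $1 saving reads as very different percentages:
```python
# The same absolute saving looks different as a percentage of the price.
def describe_saving(item, price, sale_price):
    saving = price - sale_price
    percent = 100 * saving / price
    print(f"{item}: save ${saving:.2f}, which is {percent:.0f}% of ${price:.2f}")

describe_saving("eggs", 3.00, 2.00)         # eggs: save $1.00, which is 33% of $3.00
describe_saving("light bulbs", 10.00, 9.00) # light bulbs: save $1.00, which is 10% of $10.00
```
Either way, the trip to the other store saves exactly $1.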
Research has demonstrated people fall for this trap. In one
experiment, Daniel Kahneman and Amos Tversky asked if
people were willing to make an extra 20 minute trip to save $5
on a calculator [1]. When they said the calculator was priced at
$15, they found that 68 percent were willing to make the trip.
When they said the calculator was $125, they found only 29
percent would make the trip. In each case the participant would
be saving $5 on a 20 minute trip, but fewer people were willing
to make the trip for a higher base price. It seems people were
judging the savings on the percentage of the total purchase rather
than as an absolute amount.
Salespeople may use this bias to their advantage. When my
friend was buying a car, he had negotiated to nearly the price he
wanted, but they were $50 apart. The salesperson would not
budge and my friend decided he would walk away. The
salesperson chased after him and asked, “Are you really going to
give up on the car you want just for $50?” My friend replied that
he would walk away. After all, $50 is a lot of money; it was the
difference between staying home and taking his wife out to dinner.
Besides, if $50 was not a big amount, why wouldn’t the
salesperson budge? Was he willing to lose a sale over $50?
My friend avoided the percentage savings fallacy by focusing on
the absolute dollar value. He then turned the tables and made the
salesperson think about the percentage savings! The salesperson
did agree to reduce the price by $50 and my friend bought the
car.
Note
[1] Tversky, Amos, and Daniel Kahneman. “The Framing of
Decisions and the Psychology of Choice.” Science 211.4481
(1981): 453-458.
Anchoring
The illusion: Numbers we see or hear first act as anchor points
from which we adjust in further calculations. Even seemingly
irrelevant numbers can influence how we make calculations.
The rational response: To limit the impact of anchor points,
think about the other extreme of numbers and make another
calculation. To use anchoring against others, be the one to set the
initial number.
Examples
Here is a math problem that was asked in an experiment [1].
Some people were asked to calculate the value of
1×2×3×4×5×6×7×8. Other people were asked to calculate
8×7×6×5×4×3×2×1. They had to answer in 5 seconds, which
meant they had to estimate the value of the answer.
Both expressions have the same mathematical value, so you
might think both groups would estimate about the same value.
Surprisingly, the group given the leading digit 8 gave a median
estimate of 2,250 compared to the other group whose median
estimate was 512.
The order in which the numbers were presented evidently
influenced the estimates: the larger initial number led to a higher
estimate. Also, the correct answer is 40,320, so both groups gave
poor estimates overall.
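As a quick arithmetic check, both orderings multiply the same eight numbers, so they share the value the participants were estimating:
```python
import math

# Both orderings multiply the same numbers, so the products match.
ascending = math.prod([1, 2, 3, 4, 5, 6, 7, 8])
descending = math.prod([8, 7, 6, 5, 4, 3, 2, 1])
print(ascending, descending)  # 40320 40320
# Median 5-second estimates in the study: 512 (ascending) and
# 2,250 (descending) -- both anchored far below the true 40,320.
```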
Numbers we see or hear initially can set an anchor point from
which we adjust our calculations. Anchoring comes into play in
most shopping and bargaining situations.
Clothing stores use anchoring when they list a retail price along
with the sale price. When a shirt that retails for $100 is
discounted to $50, it seems like a good deal. If the same shirt
retailed for $75 and was priced at $50, you may not think the
lower price is as good of a deal. The retail price sets an anchor
from which you evaluate the sale.
Car dealerships use anchoring multiple times. Cars are listed
with sticker prices that establish high anchor prices. Salespeople
then ask buyers to make an offer and write it down. Anchored by
the sticker price, buyers tend to write higher offers, and at that
point, the offer becomes an anchor for the bargaining process.
The buyer is unlikely to get a
lower price—the salesperson gets the buyer to commit to a price
around the initial offer.
Grocery stores use anchoring in subtle ways too. Stores around
me often advertise “10 for $10” items when they mean you can
buy 1 item for $1. Why does the store advertise the price for 10
items instead of saying each item is $1? Perhaps the store is
anchoring to the number 10 so that shoppers will tend to buy
more items. The store might also advertise the item as $1 but
with a limit of 10. The limit of 10 also acts as an anchor and
research finds it tends to get shoppers to buy more [2].
It is not practical to avoid anchoring entirely since we are
bombarded by numbers. However, you can reduce the effect by
thinking about the opposite number and the absolute merit of the
purchase. In a clothing store I use the anchor of not buying
anything at all ($0), or I think about how much the item is in a
big box retailer. Socks on sale for $10 from $20 sound good, but
if I can get usable socks for $5 elsewhere then the price is not
good. In the grocery store, if I see a deal that is 10 for $10, I
remind myself I can also spend $0 for 0 items. Many times I
realize I do not need the item at all.
At the car dealership you might try to use anchoring in your
favor. Set a price that is below the car’s invoice price. If you are
financing, see what kind of loan rate your bank would offer. If
you set the initial numbers aggressively, either they have to meet
those values or you can walk away.
Notes
[1] Tversky, Amos, and Daniel Kahneman. “Judgment Under
Uncertainty: Heuristics and Biases.” Science 185.4157 (1974):
1124-1131.
[2] Wansink, Brian, Robert J. Kent, and Steve Hoch. “An
Anchoring and Adjustment Model of Purchase Quantity
Decisions.” Journal of Marketing Research 1998 (1998): 71-81.
http://foodpsychology.cornell.edu/pdf/permission/1990-
2000/Anchoring-JMR-1998.pdf
Framing Effect
The illusion: We respond to a choice more favorably if it is
presented in a positive manner than in a negative manner. We
tend to prefer guaranteed gains over probabilistic gains, and
probabilistic losses over guaranteed losses.
The rational response: Consider the pros and cons of a choice
to avoid being swayed by one side alone.
Examples
Consider the following bizarre situation. Your friend offers you a
bet. He will try to predict the result of 20 coin flips in a row. If
he guesses incorrectly on any flip, then you win $5. However, if
he guesses all 20 correct, then you will jump from your 3rd floor
apartment balcony. Are you willing to take the chance?
Now consider another situation. Your friend has a craving for
food from a restaurant 10 miles away. Would you drive there and
bring back food for him? He offers up a $5 tip for your help.
I doubt very many people would accept the first bet. Why risk a
very slim chance of injury from jumping off a balcony for a
measly $5?
On the other hand, most of us have picked up food for a friend,
even for free.
But if you think about it, the two situations are not so different.
In both situations you have a high chance of getting $5. In fact,
the risk of injury or death from driving is likely much higher
than the 1 in a million chance your friend guesses all the coin
flips correctly.
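The “1 in a million” figure is easy to check: each fair flip is a 50/50 guess, so calling 20 in a row has probability (1/2)^20:
```python
# Probability of correctly calling 20 fair coin flips in a row.
p = 0.5 ** 20
print(p)      # 9.5367431640625e-07
print(1 / p)  # 1048576.0 -- about 1 in a million
```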
The framing effect states that our decisions can be influenced by
the method in which options are presented to us. The coin flip
game emphasizes the risk of injury. The food pick-up involves
driving, and even though we know driving carries risk of injury,
the detail is not salient when we make the choice.
In fact, the framing effect happens even when the two choices
involve exactly the same consequences.
How much pasta would you eat? Perhaps it depends on how the
food is presented to you. In one study, participants were served
pasta in different sizes [1]. The twist is that some people were
told a 2-cup portion was “Regular” while others were told it was
a “Double-size.” The amount of pasta was exactly the same, but
people evidently used the label as a guide to how much they
should eat. People told they were getting a “Double-sized”
portion ate much less, leaving 10 times as much food as those
who had a “Regular” portion.
To avoid the framing effect with food, I focus on absolute
numbers. I count calories in my food using a kitchen weighing
scale and package nutrition information. If I eat 500 calories for
lunch, I aim to get that amount when I dine out. I will read
restaurant nutrition information in advance if possible. I might
still overeat when I’m at a party or at a new restaurant while
traveling, but I avoid mistakes most of the time.
The framing effect can also influence whether we save for
retirement. Some companies offer 401(k) plans that provide tax-
advantaged retirement savings for employees. When the plan is
presented as an option to enroll (opt-in), people think the plan is
optional and they are less likely to sign up. When the plan is
presented as an option not to enroll (opt-out), people think the
plan is the norm and they are more likely to sign up [2].
To minimize the framing effect with risky decisions, you can list
the pros and cons of each choice. For example, let’s say you buy
a new phone and are offered insurance. The sales material will
emphasize the protection and the peace of mind for a low
monthly cost. You can re-frame the situation by thinking about
the hassle of filing a claim, the total cost over a year or two
years, and the likely eventuality that you would rather buy a new
phone in a year or two anyway. You might feel differently when
you consider the facts from both perspectives instead of
considering only the way the choice was initially framed.
Notes
[1] Just, David R., and Brian Wansink. “One Man’s Tall is
Another Man’s Small: How the Framing of Portion Size
Influences Food Choice.” Health Economics 23.7 (2014): 776-
791.
[2] There are a number of studies that show automatic
enrollment increases participation in 401(k) plans; the following
study showed the effect would exist even without a company
match of contributions. John Beshears; James J. Choi; David
Laibson; Brigitte C. Madrian. “The Impact of Employer
Matching on Savings Plan Participation Under Automatic
Enrollment.” Research Findings in the Economics of Aging.
University of Chicago Press, 2010. 311-327.
Unit Effect
The illusion: We perceive quantities as larger when written in
units that give larger numbers.
The rational response: Think about the quantity in other units
and consider the total amount.
Examples
Many people are willing to pay a $5 monthly cable modem
lease, a 1% investment advisory fee on a $100,000 portfolio, and
use an airline rewards card that gives 1 point per dollar spent,
with 60 points redeemable per reward dollar.
But they would be less enthusiastic about paying $60 a year for a
cable modem lease, paying a $1,000 fee annually for investment
advice, and using a reward card that gives 1.67% cash back.
If you do the conversions, the options are the same in each case
but presented with a different unit or time scale.
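Here is a minimal sketch of those conversions, using the numbers from the examples above:
```python
# Monthly lease -> annual cost
monthly_lease = 5.00
print(f"${monthly_lease * 12:.0f} per year")      # $60 per year

# Percentage advisory fee -> absolute annual fee
portfolio, fee_rate = 100_000, 0.01
print(f"${portfolio * fee_rate:,.0f} per year")   # $1,000 per year

# Reward points -> percent cash back (1 point per $1 spent; 60 points = $1 reward)
cash_back = 100 * 1 / 60
print(f"{cash_back:.2f}% cash back")              # 1.67% cash back
```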
Researchers have found people associate larger numbers with
bigger quantities [1]. The unit effect is the bias that changing the
units of a measurement can influence how we judge size.
Monthly costs seem smaller than the equivalent annual amount,
percentage fees seem smaller than the absolute amount of the
fee, and credit card rewards expressed in points seem larger than
the percentage cash back.
The good news is researchers also found a way to reduce the
bias: think about converting the item into other units. One group
had a stronger preference for expedited shipping when the
standard delivery time was stated as 31 days as opposed to 1
month.
Another group did a preparatory activity to convert between time
periods so they were thinking about how measurements in
different units can be equivalent. This group showed the same
preference for expedited shipping for 31 days and 1 month,
suggesting the group realized the time scales were the same.
When offered a subscription, think about the cost in other time
scales to reduce the unit effect.
Note
[1] Pandelaere, Mario, Barbara Briers, and Christophe
Lembregts. “How to Make a 29% Increase Look Bigger: The
Unit Effect in Option Comparisons.” Journal of Consumer
Research 38.2 (2011): 308-322. http://www.ejcr.org/Curations-
PDFs/Curations3/Pandelaere_Briers_Lembregts.pdf
Denomination Effect
The illusion: We are hesitant to spend money if it means
“breaking” a high value currency note, and similarly we are
more likely to spend money if we are carrying smaller
denomination notes.
The rational response: Focus on whether the purchase makes
sense: whether you really need the item now, whether the cost is
reasonable, and whether you could shop around for a better
price.
Examples
It’s an old idea that if you carry large bills you will be hesitant to
“break” the note, meaning that carrying large bills could assist
with spending self-control.
This denomination effect has been observed experimentally [1].
Researchers went to a gas station and found participants willing
to answer a short survey. As a reward, they were given $5, either
in the form of a single $5 bill or as 5 single $1 bills. They were
told they could use the money to buy something at the gas
station convenience store.
The presentation of the reward made a difference: while 24
percent given 5 single $1 bills decided to make a purchase, only
16 percent given the $5 note made a purchase. In other words,
about 50 percent more of the group given single $1 bills
ultimately decided to spend their reward.
It would seem that carrying large bills might be a smart idea for
spending control. However, there is a small catch.
While people with small bills were more likely to spend at all,
the people with large bills who did spend ended up
spending about 20 percent more. If you are trying to save money,
then your mistake of spending would be more harmful if you
were carrying a large bill.
I interpret the study to mean carrying large bills is a double-
edged sword: you might be less likely to spend, but if you lose
control you might splurge more.
Credit cards and gift cards offer the ease of spending similar to
small notes while also offering the spending capacity of using a
large note. This logic suggests using electronic payments could
encourage extra spending, which may contribute to the
prevalence of credit card debt. Some people prefer to use cash
only as a means of spending control.
The study applies most directly to impulse purchases, say if
you’re going out to enjoy nightlife or you’re on vacation. There
might be some benefit to carrying large bills to reduce the
impulse to spend. However, you have to be diligent so you do
not actually spend and then splurge. Carrying large notes also
comes with the risk of having the money lost or stolen.
Most of my daily purchases are planned and I will shop around
before I get to the physical store or add an item to my online
cart. When I am on vacation, I will keep to a budget set before
the trip and use whatever combination of credit, small
and large bills is suitable for the venue. In other words, I try to
keep to the principle of buying items based on their cost and
value rather than letting my ability to pay influence the decision
too much.
Note
[1] Raghubir, Priya, and Joydeep Srivastava. “The Denomination
Effect.” Journal of Consumer Research 36.4 (2009): 701-713.
Discounts Versus Bonus Quantities
The illusion: We perceive bonus quantities as a larger unit cost
reduction than a similar price discount.
The rational response: Calculate the effect on unit cost and
compare.
Examples
How smart of a shopper are you? Let’s do a simple quiz.
You’re at the store and your favorite salsa is on sale in two
different ways. Which offer has a cheaper unit-price: a discount
of 33 percent, or getting 50 percent more at the same price?
Time is up! You were in a rush and hurriedly placed one of the
jars into your cart. Which one did you pick?
The right answer is…it actually doesn’t matter. The two sales
reduce the unit price by the same amount, meaning either choice
is justifiable.
And yet consumers have a strong preference for one of the sales.
Many studies have found that people prefer bonus quantities
over discounts. Behavioral economics posits that a bonus
quantity is seen as a gain, which is valued more than the
reduced loss from a discount [1].
There are a couple of psychological reasons we may prefer the
bonuses. First, the bonus percentage is a larger number than its
equivalent unit cost discount, so a 50 percent bonus might look
better than a 33 percent discount. Second, a price cut might make
a consumer feel there is a reduction in quality.
The math of discounts vs bonus quantities
If you get a discount of x percent, this is something easy to grasp
mentally. Whatever you were paying before, you will now pay x
percent less. The difference between the old unit price and the
new one is x percent. This is linear so it is easy to understand.
What is the effect on the unit cost of a bonus quantity? If the
product gives a bonus fraction y more (so 50 percent more means
y = 0.5), then the new unit price will be the old unit price
multiplied by 1/(1 + y). The reduction between old and new unit
prices will be a factor of y/(1 + y). This discount factor is not as
easy to grasp mentally.
In summary, discounts reduce the unit price linearly whereas
bonus quantities reduce it by a fractional ratio.
As a rule of thumb, bonus quantities do not have as much impact
as their size would indicate. Here’s a graph that compares
percent discounts to bonus quantities.
[Graph: unit-price reduction from percent discounts versus equivalent bonus quantities]
A mental math method to compare discounts
You could use your calculator and figure out the unit cost. This
is a surefire way to see how a discount compares to a bonus
quantity.
But I thought of an alternate method to reframe the problem in
terms of quantities that give comparable savings. Here are a
couple of examples.
Example 1: 33 percent discount vs 50 percent bonus
Say a product is $1 for 100 grams. If you get a 33 percent
discount, you are saving 33 cents per item. That means when you
buy 300 grams, you will save $0.99.
On the other hand, compare that to a 50 percent bonus. This
means for $1 you will get 150 grams. Hence if you buy 300
grams, you only have to spend $2—a savings of $1 over the old
price.
Thus the discount and the bonus offer approximately the same
unit price savings.
Example 2: 20 percent discount vs 33 percent bonus
The bonus means you spend $1 to get 133 grams. To get to 400
grams, you have to buy 3 products instead of 4 before the sale.
So here you are saving $1 on 400 grams.
With the discount, you are saving 20 cents per item. So when
you buy 4 items to get to 400 grams, you will save $0.80.
Comparing the savings, the bonus quantity is the better deal.
General technique
If you are offered a bonus fraction y (for example, y = 0.5 for a
50 percent bonus), then find out the cost and savings for 1/y
units—buying that many effectively gets you one unit free.
Compare that to how much the discount would save for the same
quantity of product.
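If you prefer a direct calculation, here is a minimal sketch that converts both kinds of sale into their unit-price reduction, using the y/(1 + y) formula from earlier:
```python
def unit_price_cut_from_discount(d):
    """A discount fraction d cuts the unit price by exactly d."""
    return d

def unit_price_cut_from_bonus(y):
    """A bonus fraction y cuts the unit price by y / (1 + y)."""
    return y / (1 + y)

# Example 1: 33% discount vs 50% bonus -- roughly a tie
print(unit_price_cut_from_discount(0.33))  # 0.33
print(unit_price_cut_from_bonus(0.50))     # 0.333...

# Example 2: 20% discount vs 33% bonus -- the bonus is the better deal
print(unit_price_cut_from_discount(0.20))  # 0.2
print(unit_price_cut_from_bonus(0.33))     # 0.248...
```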
When in doubt, see if the store presents the unit cost, or go ahead
and use your phone calculator to get the unit cost. There’s no
shame in doing the math to overcome the tendency to favor
larger bonus quantities.
Note
[1] Chen, Haipeng, et al. “When More is Less: The Impact of
Base Value Neglect on Consumer Preferences for Bonus Packs
Over Price Discounts.” Journal of Marketing 76.4 (2012): 64-77.
Sounds Misleading
The illusion: The sounds we hear can influence our perception
of size.
The rational response: Focus on the absolute values involved.
Be aware that sound can influence decision-making.
Examples
Phonetic symbolism is the idea that sounds can influence our
perception of size. We associate larger sizes with back vowels
(like the “o” in two or the “u” in put) as well as stop consonants
(which include the letters p, k, t, b, g, d, and the hard “c” in cut).
Similarly we associate smaller sizes with front vowels (like the
“ee” in three or the “i” in six) as well as fricative consonants
(like s, f, v, and z).
Here’s a quick quiz. Which sale offers a larger discount: a
reduction from $10.00 to $7.66, or a reduction from $10.00 to
$7.22? In one study people were shown many sale price
discounts and asked to say the sale prices out loud [1]. Later they
were asked to give their perception of the sale in terms of a
percentage discount. The discount to $7.22 was perceived as a
smaller discount than $7.66. A possible explanation is the words
used for the numbers. The word twenty-two uses a back vowel
(“o” in two) and a stop consonant (“t” in twenty) so the size was
perceived as larger than that for the word sixty-six, which
includes a front vowel (“i” in six) and a fricative consonant (“s”
in six).
Interestingly the study recruited bilingual participants who could
speak Chinese. When the same sale prices were rehearsed in
Chinese, which has the opposite phonetic symbolism for those
numbers, the effect was reversed: the discount to $7.22 was
perceived as larger.
Research indicates there may be other ways that sound can
influence our spending decisions. One study suggests we enjoy
brand names that involve repetitive sounds (like Coca-Cola or
Jelly Belly) [2]. Another found that people might be able to
make more accurate decisions under a time constraint when
listening to faster paced music [3].
Pay more attention to the impact of sounds when you make a
decision, and then focus on the details which matter: like how
much money you are paying, the absolute value of the discount,
and the value you will get.
When we need to concentrate, most of us will silence our cell
phones and turn down the music to avoid distracting sounds and
interruptions. While we cannot block out sounds all the time, we
should be more aware that sounds can influence our decisions.
Notes
[1] Coulter, Keith S., and Robin A. Coulter. “Small Sounds, Big
Deals: Phonetic Symbolism Effects in Pricing.” Journal of
Consumer Research 37.2 (2010): 315-328.
http://www.ejcr.org/Curations-
PDFs/Curations3/Coulter_Coulter.pdf
[2] Argo, Jennifer J., Monica Popa, and Malcolm C. Smith. “The
Sound of Brands.” Journal of Marketing 74.4 (2010): 97-109.
[3] Day, Rong-Fuh, et al. “Effects of Music Tempo and Task
Difficulty on Multi-Attribute Decision-making: An Eye-
Tracking Approach.” Computers in Human Behavior 25.1
(2009): 130-143.
Endowment Effect
The illusion: We place more value on an item we own—that we
are “endowed” with—than on an item we have to buy.
The rational response: Value the item as if you were a
dispassionate buyer. Check out listings for the item or
comparable items and be realistic about its value.
Examples
Years ago I was shopping for a computer monitor. There were
listings for $175 used. I thought the price was not great. I
grabbed a new monitor for only a bit more at $250.
A few months later I was gifted a better monitor and my old one
became redundant. What to do? My friend suggested I sell it on
Craigslist for $175.
I thought: no way! My monitor was worth at least $200. Why
would I sell it for less?
Then I realized the strange inconsistency. As a buyer I thought
$175 was too much for a used monitor. But as a seller I thought
$175 was too little. Suddenly I was placing a higher value on
used monitors as I changed roles from buyer to seller.
This is the endowment effect: we often place more value on
items we own and possess than on items that we have to
purchase.
For example, one experiment asked participants to evaluate the
value of a mug compared to pens [1]. Half of the people
imagined setting a price for a mug in terms of pens. In the other
half people were first given a mug so they “owned it,” and then
asked how many pens they would need to trade for (sell) a mug.
These participants required twice as many pens in compensation!
It was the same mug and pens. Evidently a person who was
randomly assigned a mug was less willing to part with it.
Companies know people will pay more for products they feel
attached to. Clothing stores encourage you to try on clothes since
it might increase your willingness to pay. Car dealerships let you
test drive models, perhaps even letting you take a car home for a
day. Products on TV often advertise you can try them “risk-free”
for a month and get a 100 percent refund if you are not satisfied.
Once people start using the product they value it more, and
consequently they are less likely to return it.
If you want to de-clutter your home, be honest about the value of
the items. You can have a sentimental value for family
heirlooms, but for items you want to sell, consider what someone
would actually pay for them.
As for shopping, be careful about browsing at the tech store or
pressing the “try me” button on toys, as the interaction might
create an attachment that will make you more likely to buy the
item.
Note
[1] Kahneman, Daniel, Jack L. Knetsch, and Richard H. Thaler.
“Experimental Tests of the Endowment Effect and the Coase
Theorem.” Journal of Political Economy (1990): 1325-1348.
Loss Aversion
The illusion: We feel worse about losing $1 than we would
about gaining $1. In other words, we try harder to avoid a loss
than to seek a gain.
The rational response: A $1 gain affects the bottom line
similarly to a $1 loss. Build a buffer so you can sustain losses
and take chances to acquire gains.
Examples
Loss aversion is related to the endowment effect: we place a
higher value on an item we own because we feel worse about
losing the item than about gaining its monetary value. We should
value the loss and gain equally, but we tend to emphasize the
value of a loss.
Loss aversion may play a role in why most Americans love to
get a tax refund on income taxes. It feels great to get a tax
refund, but it does not mean you have saved money. The amount
you owe for taxes is the same whether you get a refund or have
taxes due. A tax refund means you have overpaid taxes through
the year and now you are getting your money back.
In short, a tax refund means you have (1) lost out on money that
could have earned interest, (2) lost cash flow on each paycheck,
and (3) provided the government an interest-free loan.
Mathematically it is better to pay taxes accurately. You do not
want to owe taxes every year, as you might incur a penalty, and
you do not want a large refund either.
Why do people prefer refunds? Loss aversion provides one
explanation. If you owe taxes, it feels like a loss since you are
paying money. Most people would rather avoid this perceived
loss. They would rather have a refund, which feels like a gain.
Tax preparers advertise to help people get refunds, since it is
better to deliver the “good news” of a refund rather than the “bad
news” that taxes are owed.
(There are exceptions when getting a tax refund makes sense,
because there are many ways to accumulate a tax refund. The
above refers to a common case that people excessively withhold
their earnings from a paycheck.)
Loss aversion may also impact product placement in stores.
While I was shopping at a big box retailer, I found batteries
placed at the end of a cereal aisle. In fact, I found batteries were
placed almost everywhere in the store: at the checkout aisles,
next to kids toys, in the electronics section, in the auto section, in
office supplies, at the end of the razor aisle, and at the end of the
soap aisle.
The placement in electronics makes sense. But what do batteries
have to do with cereal or soap bars?
I suspect loss aversion plays a role. When I see a display for
batteries, I wonder, “Hey, don’t I need to change my smoke
alarm battery soon? I don’t want to hear that annoying chirping
sound.” I’m not motivated by the gain of buying the battery. I’m
motivated by the loss and regret if I don’t buy it.
You can limit loss aversion by looking at the big picture and
making sure not to be overly sensitive to the potential for a loss.
Zero Risk Bias
The illusion: We favor choices that reduce a negligible risk to
zero over choices that would have a large risk reduction to a non-
zero risk level.
The rational response: Do not overpay to reduce the risk to
zero. Consider the absolute level of risk reduction and the cost of
the risk reduction.
Examples
We tend to favor certain outcomes. In one study, people were
asked how much they would pay to reduce risk for a pesticide
that would cause 15 adverse reactions in 10,000 containers [1].
People were willing to pay $1.04 to reduce the risk from 15
reactions to 10. What if instead the pesticide caused 5 adverse
reactions in 10,000 containers? In this case, people were willing
to pay $2.41 to reduce the risk from 5 reactions to 0. The
absolute number of cases reduced per 10,000 is the same in both
cases. But people were willing to pay more than double the
amount when the end goal promised zero reactions (entailed zero
risk).
The bias is that we prefer reducing a risk to zero over reducing a
risk by an equal, or even larger, amount to a lower but nonzero
level.
Once my friend was eating a bacon cheeseburger with fries and
having a beer. I asked if he wanted ketchup. He said no thanks
because, while he was not dieting or trying to lose weight, he
was avoiding high fructose corn syrup entirely since it was
unhealthy. This is like the zero risk bias: my friend put more
effort into removing ketchup from his diet entirely than into
reducing the overall amount of unhealthy food in his diet.
A math riddle (optional)
This problem also appears in my book Math Puzzles Volume 3.
I want to give a mathematical example that illustrates the zero
risk bias. In fact, we can prove the choice most people prefer
intuitively is the wrong choice according to rational agent theory.
The example involves some assumptions about utility and is a
proof based on logical deductions.
The mathematical derivation can be safely skipped without loss
of continuity for the rest of the book. However, I want to present
the problem to give a flavor of rational agent theory. You might
feel uncomfortable about some of the assumptions or the
conclusion. And that is natural: rational agent theory is not
perfect, but it does illustrate how one should approach a problem
under specified assumptions. Occasionally we learn something
novel and counter-intuitive.
With those caveats, here is the problem.
In Russian roulette, a revolver is loaded with one or more
bullets. A turn involves spinning the cylinder to randomize the
location of the bullet(s), at which point a player puts the gun to
his head and pulls the trigger. If the player is lucky to survive,
the game continues with the next player.
Consider two different versions of the game with identical six-
shooter guns.
Situation 1: You are playing the game with one bullet.
Situation 2: You are playing the game with four bullets.
In each game, you are given the option to pay money to remove
a single bullet. If your preferences are to be logically consistent,
should you pay more money to remove the bullet in situation 1,
situation 2, or would you pay the same amount?
Almost everyone (myself included) would pay more money in
situation 1 to remove the single bullet. The gut feeling is that it’s
worth more money to survive with certainty than to reduce the
odds of death in situation 2. This is the zero risk bias.
We can prove this is the wrong decision according to rational
choice theory. It is logically sensible to pay more to remove a
bullet in situation 2 if you prefer being alive to being dead and
prefer having more money to having less. This is known as
Zeckhauser’s paradox [2].
We will place a utility on each event and then evaluate the
situations as gambles. We can compare the value of each
situation just as we would value the expected results in a lottery.
Consider the events D = Dead and A = Alive. Also consider Lx as
being alive after paying x dollars and Ly as being alive after
paying y dollars.
If you pay x dollars to remove the one bullet from six, then you
are saying the utility of being alive after paying x dollars equals
the utility of the lottery of playing the game, in which there is a
1/6 chance of death and a 5/6 chance of being alive. (In Von
Neumann and Morgenstern utility theory, a rational agent is
indifferent between two lotteries with the same expected
utility. So the value x you are willing to pay is the one that
makes you indifferent—any more and you are overpaying, any
less and you’d prefer to remove the risk.)
Therefore, with a utility function u, we have:
u(Lx) = (1/6) u(D) + (5/6) u(A)
Similarly, when you are willing to pay y dollars to remove a
bullet, going from four to three, you are indifferent between the
lotteries where you play the game with four bullets or pay to
play the game with three bullets. This means the following
equation:
(3/6) u(Ly) + (3/6) u(D) = (4/6) u(D) + (2/6) u(A)
We can simplify the above equation to get:
u(Ly) = (2/6) u(D) + (4/6) u(A)
If we take u(D) = 0, then since we prefer to be alive that means
u(A) > 0. So we have derived the following equations:
u(Lx) = (5/6) u(A)
u(Ly) = (4/6) u(A)
Let’s subtract the second equation from the first, and notice the
result is positive:
u(Lx) - u(Ly) = (1/6) u(A) > 0
In other words, you prefer to be alive after paying x dollars to
being alive after paying y dollars. But since you prefer to pay
less—since having more money is better—that must mean x is a
smaller amount of money than y!
Therefore, under the Von Neumann and Morgenstern utility
theory, you should be willing to pay more for situation 2 where
you remove one bullet from four.
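As a numeric illustration, suppose (purely for this sketch) that u(D) = 0 and the utility of being alive is proportional to your remaining wealth. The two derived equations then pin down the payments x and y directly:
```python
# Illustrative assumptions for this sketch only:
# u(dead) = 0, and u(alive with wealth m) = m.
W = 600.0  # arbitrary starting wealth

# Situation 1: u(alive, W - x) = (5/6) u(alive, W)  =>  x = W/6
x = W - (5/6) * W   # 100.0

# Situation 2: u(alive, W - y) = (4/6) u(alive, W)  =>  y = W/3
y = W - (4/6) * W   # 200.0

print(x, y)  # 100.0 200.0 -- consistency demands y > x
```
Under these assumptions you would pay twice as much to remove a bullet in situation 2.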
The mathematical example should not be taken literally: I would
hope you never have to play Russian roulette and choose
between two wild options. The mathematical example is meant
to show how rational theory suggests why reducing risk to zero
may not be better than reducing risk to a lower level.
As I mentioned above, we often do allocate money and time to
reducing some risks to zero—like eliminating ketchup—when
we could have better results by considering total risk—like
reducing the amount of beer and bacon cheeseburgers.
Notes
[1] Viscusi, W. Kip, Wesley A. Magat, and Joel Huber. “An
Investigation of the Rationality of Consumer Valuations of
Multiple Health Risks.” The RAND Journal of Economics
(1987): 465-479.
[2] I read about Zeckhauser’s paradox in Ken Binmore’s
textbook Playing for Real: A Text on Game Theory.
3. Losing Track: Stay On Target
In the figure below, if the top left line is extended to the right,
which line segment would line up with it?
[Figure: a line interrupted by a gap, with numbered candidate continuations]
At first glance it appears 2 is the correct answer. But use a
straight edge and try it out: the correct answer is 3.
Normally we have no problem following a straight line. But in
this figure there is a gap that shifts our perception upward. We
have trouble keeping track of how the line extends and we miss
the pattern.
Analogously, companies can trick our perception of cost by
introducing steps that make us lose track of where our money is
going.
This chapter covers cognitive biases in which we spend
excessively, or save too little, because we fail to account
accurately.
In the optical illusion we can remedy our perception of the line
by taking the extra step of measuring with a straight edge.
Similarly, we can overcome the related cognitive biases by
measuring our habits and being more conscious about our
spending.
Untracked Spending
The illusion: We overestimate our ability to track spending and
saving habits.
The rational response: Periodically analyze your account
balances to see if you are staying on track. You might consider
tracking your expenses and income to see how much you are
actually saving and spending in a year.
Examples
When people ask me how they can improve their spending
habits, I start by asking them some questions. How much did you
spend on gifts last year? What percentage of your income do you
spend on gas? Most people hesitate and admit that they are not
exactly sure. Occasionally, a person gives me a defensive answer
that the questions do not matter because “I know where my
money is going.” I can’t fully dispute that claim, but I am
skeptical since many people want to save more. If you cannot
quantify your discretionary expenses, do you really know where
your money is going?
Without careful accounting, we cannot be entirely sure where
our money is going. In fact, one study found people tend to
underestimate how much they will spend at a future time, which
suggests we tend to spend more than we expect and we do not
save enough for the future [1].
A basic remedy is to check your account balances every week or
every month. You can see how your credit cards, checking
account, savings accounts, and investments are doing.
The downside to this method is that cash flows are complicated.
Many people save for retirement and get raises at work while
also taking out loans on their home or having balances on their
credit card. Is your net worth increasing or decreasing? Can you
handle an unexpected health emergency, and how will your
finances be after that?
To be more careful, I also keep track of my income and spending
every day in addition to checking my accounts regularly.
There are several benefits to tracking expenses and keeping a
budget. First I have a record of where my money is going and
coming from. Second I can analyze the data to see if I’m
overspending or how I might save. And third, I can budget for
savings and see if I am on track.
Many people use online websites like Mint.com which will
automatically tabulate spending from linked accounts. You can
also write down your expenses in a spreadsheet. I use a
spreadsheet, and you can get a basic version of my template for
free on my website
(http://mindyourdecisions.com/blog/financial-
tools/#expense_tracker).
Note
[1] Peetz, Johanna, and Roger Buehler. “Is There a Budget
Fallacy? The Role of Savings Goals in the Prediction of Personal
Spending.” Personality and Social Psychology Bulletin (2009).
Too Many Goals
The illusion: We overestimate our ability to save for many goals
at the same time.
The rational response: Consider saving for a specific goal or
just a few goals at a time.
Examples
College graduates often plan to pay off debt, save for a home,
contribute money for retirement, and save for travel, all at the
same time. Is this a good idea?
Research suggests that having a single savings goal is often
better than trying to accomplish multiple, competing goals [1].
The issue is that trying to save for multiple goals leads to a
deliberative rather than implemental mindset, which means
people end up thinking about saving money rather than actually
saving money.
One part of the study took place over six months in rural
India. The study recruited 83 households to meet with a financial
planner. Some households were told to save for multiple goals
(for example, retirement and a child’s education) whereas others
were told to save only for the goal of a child’s education.
Additionally, some households were told to stash cash into an
envelope to make the saving even more concrete. The
households then were asked to track their expenses for a 6-
month period. The average savings rate increased in all
households, but saving for a single goal resulted in the highest
savings rate.
Here are the average saving rates for the different methods:
10% - Single goal, with envelope
8% - Single goal, no envelope
6% - Multiple goals, with envelope
5% - Multiple goals, no envelope
3% - No specific savings goal
People saved more when they had a goal, and they saved the
most when they had a single goal. Putting the savings in an
envelope also increased the savings rate.
The results suggest it can be smart to focus on a single goal to
encourage action. Then you can use those savings for multiple
goals, like retirement, a new car, health expenses, and so on.
Note
[1] Soman, Dilip, and Min Zhao. “The Fewer the Better: Number
of Goals and Savings Behavior.” Journal of Marketing Research
48.6 (2011): 944-957.
Commitment Device
The illusion: We succumb to temptation even when we plan to
avoid a bad habit.
The rational response: Use a commitment device to prevent a
bad habit or increase its cost.
Examples
We learned about the marshmallow test in the opening chapter.
In the experiment, children were presented with a single
marshmallow. They could either eat the marshmallow
immediately or they could wait for 15 minutes and earn a reward
of a second marshmallow. Only about 30 percent of the children
did wait 15 minutes, and those that could delay gratification had
higher educational achievement and a lower body mass index 10
years later.
The marshmallow test suggests having willpower can result in
success and health later in life. But there is another aspect to the
story. Most of the children, even those that did wait, felt the
temptation to eat the single marshmallow. And most of the
children, even those who did not wait, felt that waiting for the
second marshmallow was the better choice. The children who
did ultimately wait were better at avoiding temptation and
focusing on the future reward.
In other words, we often know what is right, but we might lack
the willpower to do what is right. How can we prevent ourselves
from making mistakes in the future? One method is to use a
commitment device that forces us to take a specific action or
raises the cost of doing a bad action.
For example, consider how many people try to save money. They
set out a monthly budget and plan to save money to pay off debt
or invest for retirement. But during the month there are several
temptations to spend money on shopping deals, restaurants, and
so on. At the end of the month people often save less than they
initially wanted to.
A commitment device is one way to keep to the spending plan.
People who spend on credit cards excessively might want to
force themselves not to use credit cards at all. They could cancel
the cards and only use cash. Some people have been known to
freeze their credit cards in an ice block. This method increases
the cost of temptation: while someone could still use the credit
card, the person would have to wait for the ice block to melt,
thereby reducing the possibility of an impulse purchase.
Another budgeting commitment device is to make savings
automatic by directing a portion of a paycheck into a savings or
an investment account. This locks in a specific savings amount
before there is a chance to spend the entire paycheck.
Some people also use public declarations as commitment
devices. Let’s say a person wants to save for a new car, and they
need to be on a savings plan. One way to increase the motivation
is to tell friends and family about the plan. Now if the person
fails to stay on budget, there is the additional cost that other
people know it, and that might add extra motivation to avoid
temptation.
Many times we know the correct decision in advance, and we
can use a commitment device to steer us toward the right action,
whether it is a physical device (ice block for credit cards), a pre-
committed action (automatic savings), or a public commitment
(telling friends).
Flat Rate Bias
The illusion: We often opt for unlimited plans or high-cost
bundles at a fixed cost when we would spend less on an a la
carte plan where we pay for each use.
The rational response: Measure your usage and analyze whether
you would be better off paying per use.
Examples
Have you ever noticed how common unlimited plans have
become for cell phones (text messaging, voice minutes) as well
as entertainment services like music and video streaming? There
is a convenience in paying a fixed rate without fear of overages.
But there is another reason companies offer bundled plans: they
might end up earning more money than if people paid per use.
For example, think about the following situation based on cell
phone plans in 2009. Would you rather choose an unlimited
voice plan for $100, a plan with 1,000 minutes for $50, or a plan
costing 25 cents per minute?
If you are like most people, you would opt for the unlimited or
1,000 minute plan. If you talk a lot, these plans have a much
lower per minute cost. Plus, the plans offer peace of mind: on
the pay per minute plan, you might feel stressed that each minute
you talk costs money.
But did people actually choose wisely? A survey in 2009 asked
people about their plans and their actual cell phone usage [1].
Surprisingly, many people were buying expensive plans but not
talking a lot of minutes. The group as a whole was paying an
astonishing $3 per minute of actual talk time! Clearly some
people could have saved money by opting for the plan of 25
cents per minute.
While we may prefer paying a flat rate, we should analyze our
usage and consider whether a pay per use plan would be
cheaper.
It might help to realize that no plan is truly “unlimited.” For
instance, consider a phone company’s offer of “unlimited” calls
in a month. Does that really mean there is no limit to the number
of calls you can make? Of course not, because you have to face
the physical limit of the amount of time there is in a month. Even
if you were on the phone nonstop, there is a physical limit of 60
minutes in an hour and 24 hours in a day. It is impossible to talk
more than 1,440 minutes in a day. There are also practical
limitations in that you will not be on the phone every single
minute. Logically every “unlimited” plan has physical and
practical limitations. Avoid thinking about plans as “unlimited”:
think about your actual use when making a decision.
You can avoid the bias by being honest with yourself. You might
start out with a pay per use plan and see if your usage justifies a
flat unlimited rate. Or you can try an unlimited plan and then
calculate how much the plan costs given your actual usage.
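To make the comparison concrete, here is a minimal Python sketch
using the hypothetical 2009-style prices from above; the function
name and the usage levels are my own illustration:

    def plan_costs(minutes_used):
        # Monthly cost of each hypothetical plan at a given usage level.
        return {
            "unlimited ($100 flat)": 100.00,
            "1,000-minute bucket ($50)": 50.00,  # valid while usage stays under 1,000
            "pay per use ($0.25/min)": 0.25 * minutes_used,
        }

    # At 33 minutes a month, the unlimited plan works out to roughly
    # $100 / 33, or about $3 per minute of actual talk time.
    for minutes in (33, 150, 600):
        print(minutes, plan_costs(minutes))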
Something similar to the flat rate bias occurs in restaurants and
hotel buffets. You can choose a flat rate, say $20, and eat an
“unlimited” amount from the buffet. Or you can order items off
the menu a la carte. Years ago I would often opt for the buffet.
But now I realize that I can usually get more food—and even
leftovers—if I order a la carte. I remind myself the buffet is not
really unlimited: it is limited by the amount I can eat. If a large
pizza costs $8—which would feed me for two meals—then a
buffet for $10 is not cheaper. Occasionally I opt for the buffet if I
want variety, or if I am in a group that cannot agree on the menu
choices. In those cases I pay for the convenience of the buffet,
knowing that ordering a la carte would be cheaper for me.
I consider the same idea when I’m with a group. If I’m with a
group of 5, for example, I would consider if we could get more
pizza with $10 x 5 = $50 from the buffet or by ordering $50
worth of pizza and sharing. Very often we fare better when
ordering items from the menu versus opting for the buffet.
Note
[1] David Lazarus. “Talk isn’t cheap? For cellphone users, not
talking is costly too.” 08 March 2009. Los Angeles Times.
http://articles.latimes.com/2009/mar/08/business/fi-lazarus8
Decoy Effect
The illusion: When we have trouble picking between two
choices, the presence of a third “decoy” choice can cause one of
the choices to look more attractive by comparison.
The rational response: Be on guard when a choice has 3
options. Try to evaluate each option on its absolute merit rather
than comparing the options relative to each other.
Examples
Which option sounds better to you?
(A) 5 star restaurant, 25 minute drive
(B) 3 star restaurant, 5 minute drive
Do you want a highly rated restaurant that is far away, or a less
well rated restaurant that is closer? In rational agent theory you
might prefer one choice or the other, or you might be indifferent
between the two choices. How you feel about A and B should be
independent of the other choices offered.
The decoy effect says how people feel about A versus B can be
influenced by the presence of a third choice [1].
First, imagine the third choice is:
(C) 4 star restaurant, 35 minute drive
This restaurant is similar to (A) 5 star restaurant, 25 minute
drive, but C is not as well rated, and C is farther away. The
presence of option C makes option A look better by comparison.
In a study in 1982, people shown this choice set had a preference
for option A over B.
The twist is that the study also asked a group to evaluate a similar
choice. Options A and B were the same, but in the variation the
third option was:
(C’) 2 star restaurant, 15 minute drive
In this choice set, option C’ is not as well rated as (B) 3 star
restaurant, 5 minute drive, and C’ is farther away. So option C’
makes option B look better. Now the group showed a preference
for option B over option A.
In other words, how people evaluated A versus B changed
depending on the details of an inferior option C. Options A and
B were the same in both scenarios, and option C (or C’) was
never a good choice. So why would a third option change how
people feel about options A and B?
The third option is a decoy that makes one of the options look
more attractive by comparison. In a town of highly rated
restaurants, when C is a 4 star restaurant, you want to go to the
best rated restaurant of choice A. In a town of lower rated
restaurants, when C’ is a 2 star restaurant, you just want to go to
the closest restaurant of choice B.
The third option is worse in every way—it is asymmetrically
dominated—compared to some other option, and it draws
attention to a specific characteristic. In the first example the
third option makes you focus on a restaurant’s rating, leading to
choice A, and in the second the third option makes you focus on
the time to the restaurant, leading to choice B.
Companies can employ the decoy effect in subtle variations. For
example, imagine a restaurant only offered small and large drink
sizes. Let’s say the restaurant really wants people to buy the
large size to bring in revenue. The restaurant might introduce a
medium size that is close in price to the large size but
significantly smaller. The decoy of the overpriced medium at a
modest size could encourage people to focus on which drink
gives the best value (the most liquid for the money), and that might
lead to people buying the large size. If instead the restaurant
wants to encourage sales of the small size, it could make a
medium size overpriced and only a bit larger than the small. In
that case people might focus on spending less money and buy
more of the small size.
I came across a similar example when I was buying a snow
blower. The company offered a $400 model and a more powerful
$560 model. Most people would consider the tradeoff of price
and features and perhaps see if the $400 could work. The
company also offered a $720 model, which was the same as the
$560 but with an ergonomic chute system. This model did not
have many reviews and was less popular. Plus, the chute system
did not seem like it was worth a $160 upgrade. Why would the
company offer the expensive $720 model? The expensive model
possibly acted as a decoy to encourage people to focus on
features of the snow blowers, which would lead them to choose
the $560 model over the $400 one.
Once you recognize the decoy effect you can take measures to
avoid its influence. If you suspect the decoy effect pushes you to
option A, for example, then dream up another decoy that might
influence you to option B. You may realize that you don’t really
want option A after all. Alternately, rather than comparing the
choices to each other, you should evaluate each choice on its
absolute merits.
Note
[1] Joel Huber; John W Payne; Christopher Puto (1982).
“Adding Asymmetrically Dominated Alternatives: Violations of
Regularity and the Similarity Hypothesis.” Journal of Consumer
Research, 9, issue 1, p. 90-98.
Pre-Order Bias
The illusion: We are willing to overpay for unreleased products
because we interpret a high price as a signal of high quality.
The rational response: If you can wait, many products drop in
price over time.
Examples
People love to wait in line for the newest iPhone, and they might
even camp out for a movie release. Queuing and pre-ordering
does make sense when a product sells out. But many technology
products are often available soon after the rush, with initial bugs
fixed too. The opening night lines of movies fade away within a
few weeks, and within months movies become available for
home rental.
Why is there a rush for products that will be in good supply very
soon and likely at a lower price?
One paper suggests there is a psychological reason for how we
view unreleased products [1]. A series of four experiments
suggested that people evaluate the price of a product differently
depending on the immediacy of the product. A high price for a
product on the market can be a bad sign, as people will wait for
the price to drop. A high price for an unreleased product is
instead perceived as signaling high quality, and the early
impression persists even as the product gets into the market.
There are times you might want to be the first to have an item. If
you publish movie reviews, you would want to see a movie in
advance of everyone else. Or if a new technology can help increase
business sales, then a high cost pre-order may be worth it. But
other times it can be worth waiting until the price drops.
To avoid the pre-order bias, imagine the product was sitting on
the shelf of a local store. Would you pick it up, or would you
wait for it to go on sale, as most products predictably do?
While there are times pre-ordering makes sense, many people
probably buy unreleased products too frequently. I personally do
not mind letting others try out new products and then benefiting
from their reviews and experiences. Very often they are not as
happy as they expected and I can avoid buying a lot of mediocre
products.
Note
[1] Bornemann, Torsten, and Christian Homburg. “Psychological
Distance and the Dual Role of Price.” Journal of Consumer
Research 38.3 (2011): 490-504.
Present Bias
The illusion: We overvalue immediate rewards compared to
future rewards that would be worth the wait.
The rational response: Have a rule of thumb to measure the
value of time. Analyze whether you could have done better by
being patient.
Examples
Would you rather have $80 a year from now, or $100 a year and
a month from now?
Now how about this: would you rather have $80 now, or $100 a
month from now?
In the first question, most people would probably pick waiting a
year and a month for $100. They can see that waiting one month
for an extra $20 is a good tradeoff.
In the second question, however, people might change their
answer to take the $80 now. There is a temptation to get money
immediately that outweighs the value of waiting one month for
$20.
These preferences are not logically consistent. To see why,
imagine you answered you would wait 1 year and a month for
$100. After one year, you are then offered the choice to change
your mind: do you really want to wait 1 month for $100, or
would you rather have $80 now? This is the situation from the
second question. If you were willing to wait for the $100, you
should still be willing to wait, even though the waiting time for
both options has been reduced by a year.
What’s the proper way to evaluate tradeoffs of time and money?
A starting point is to use the discounting model from rational
agent theory. The formula can be written in two ways:
Future value = (Present value) × (1 + r)^t
Present value = (Future value) / (1 + r)^t
where r = the interest rate and t = the time period
(One caveat: make sure the interest rate and the time variable are
in the same units of time. If the interest rate is an annual rate (12
months), then you would want to put the time value in terms of
years. Or, if the time is written in months, then you would want
to convert the interest rate from an annual rate to a monthly rate.)
Let’s do an example of comparing future values. Suppose you
can earn 5% annually in a savings account and no other
investment options are available. Would you rather have $95
now, or $110 a year from now? Notice that if you got $95 now,
you could put that into a savings account and earn 5%. The
future value is therefore $95(1 + 5%) = $99.75 a year from now.
You would earn more by taking $110 a year from now.
An alternate method is to compare the present value of each
option. What is the present value of getting $110 a year from
now? The future value of $110 needs to be discounted to the
present day. This is done by dividing by the discount factor,
which is 1 plus the interest rate, or 1 + 5% = 1.05. The
present value is computed as $110/1.05 = $104.76. Because this
amount is larger than the other option of $95, the conclusion is
that waiting 1 year for $110 is the better choice.
The method essentially provides a systematic way to evaluate
money and time and compare future rewards and present rewards
logically. If we were given the choice of $105 now or $110 later,
for example, we would immediately reply we would take the
$105 because that is larger than the present value $104.76 of
waiting for $110 a year from now.
For a final question, would you rather have $100 now, or $110
two years from now, given an interest rate of 5 percent? We can
calculate the future value of $100 as $100(1 + 5%)^2 = $110.25.
The exponent of 2 refers to the time period of 2 years. The future
value of $100 is larger than $110, so the correct choice is taking
$100 now.
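For readers who like to check the arithmetic, here is a minimal
Python sketch of the two formulas; the function names are my own,
and the 5% rate is the one assumed in the examples above:

    def future_value(present, rate, periods):
        # Compound forward: PV * (1 + r)^t
        return present * (1 + rate) ** periods

    def present_value(future, rate, periods):
        # Discount back to today: FV / (1 + r)^t
        return future / (1 + rate) ** periods

    print(future_value(95, 0.05, 1))    # ~99.75 -> $110 in a year beats $95 now
    print(present_value(110, 0.05, 1))  # ~104.76 -> same conclusion in today's dollars
    print(future_value(100, 0.05, 2))   # ~110.25 -> $100 now beats $110 in two years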
Most financial products and investments are analyzed using a
discounting model. These calculations are more complicated in
reality because many variables have to be estimated, cash flows
can occur in multiple time periods, and there is risk the numbers
can change. But the basic model presented here underlies the
price of many financial products.
When people make decisions, however, they do not tend to
discount according to the rational agent model. Instead, they tend
to overvalue immediate rewards and undervalue future rewards,
which some call hyperbolic discounting. We are biased for the
present time period, and in many cases we may be better off
waiting.
To overcome present bias, you can either properly account for
present values, or you can force yourself to think about the value
of money at a later time. You may realize that patience can pay
off.
For example, if you are faced with the choice of $80 now or
$100 a month from now, you can consider the present value of
the $100. Alternately, you could reduce the effect by imagining
the rewards are at a later date: would you wait 1 year for $80 or
wait 1 year and a month for $100? Either way you will be in a
better position to evaluate the tradeoff of time and money.
Opportunity Costs
The illusion: We emphasize the direct costs of a decision while
overlooking the indirect costs of the decision, or the value of
making another decision.
The rational response: Consider the true cost of a decision,
including the value of opportunity costs.
Examples
The opportunity cost of a decision is the value of the best
alternative decision—the value of what you could have spent the
money on instead. The opportunity cost includes the direct costs
and the indirect costs of a decision.
For example, suppose you want to splurge on a high-end TV.
The simple cost is the cash outlay of thousands of dollars. But
this does not reflect the true cost. The opportunity cost is what
you could have done with that money instead. You could have
gone to a lot of movies. You could have bought your friend a
gift. You could have invested in stocks. You could have paid off
loans. When you take these factors into account, buying a TV
may not be the right decision.
When you make a decision, you want to consider the alternatives
you are giving up. One mistake is to go for a low direct cost
decision that entails a high opportunity cost. For example, many
junk foods are tempting and they are inexpensive compared to
fruits and fresh vegetables. The junk food has a low direct cost,
but over years and years, a diet of junk food may lead to costly
health problems.
Another mistake is to avoid a high direct cost decision that
would pay off in indirect ways. Buying a low end appliance
offers immediate cost savings, but it might involve higher
operation and maintenance costs compared to an energy efficient
model. In some cases buying an appliance with a high sticker
price could save money due to lower energy bills.
In 2007, I remember analyzing the opportunity cost of a “free”
checking account. Many banks would waive a $10 monthly fee if
you held an account with $10,000. It sounded like a great deal
until you considered the opportunity cost. At the time, online
savings accounts were offering 5% interest rates. If you would
normally have held much less in the checking account, say
$4,000, then the extra $6,000 you would have deposited could
have earned $6,000 x 5% = $300 in interest per year, which
amounts to $25 per month. In other words, in order to save $10
monthly in checking account fees, you would be losing $25
monthly in interest! When you consider the opportunity cost,
you realize the “free” checking account is ultimately costlier.
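The arithmetic is simple enough to script. Here is a minimal sketch
using the 2007 figures from the example (the variable names are
mine):

    parked_cash = 10_000 - 4_000         # extra cash held only to waive the fee
    lost_interest = parked_cash * 0.05   # $300 per year at the 5% savings rate
    waived_fees = 10 * 12                # $120 per year in avoided monthly fees
    print(lost_interest - waived_fees)   # 180.0 -> the "free" account costs $180 a year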
You should always consider the direct and indirect costs of a
decision. In fact, many times the indirect costs relate to how you
spend your time, and that is the topic of the next section.
The Value Of Your Time
The illusion: We waste a lot of time to save money, or we spend
a lot of money to save only a little bit of time.
The rational response: Value your time accurately. Think about
the opportunity cost of your time.
Examples
Is free food worth it? Every year a fast food restaurant offers free
food on a specific day. And every year the promotion brings big
crowds, so people might wait hours for one free burrito or other
food item.
While free food is great, the people in line are forgetting the free
food costs time, which could have been used to work more,
exercise, or just relax. The people waiting in line may not be
accurately valuing their time.
On the other hand, people will often overvalue their time. Many
people forgo comparison shopping because they believe the extra
time does not justify the savings. In fact, one study found that
cherry-picking sale items at grocery stores can bring relatively
high savings, equivalent to earning about $15 or $20 hourly,
which is more than many people earn at work [1]. In other
words, the savings often do justify the extra time cost of
comparison shopping.
How can you value your time accurately? While there is no
specific rule, there are a few methods that can help.
Method #1: your hourly billing rate (how clients value your
time)
Many service jobs like consulting and legal work bill out their
work on an hourly basis. It is important to know your hourly
billing rate to make better decisions for the company and be a
better employee. For instance, if you bill at $175 per hour, then it
is better to outsource a large copying job for $30 than spend an
hour at the copier.
Because these decisions help your company and not you directly,
you may ask why you should care. There are three main reasons
you ought to take note. First of all, your salary and bonus are
ultimately dependent on your company’s overall success—in
other words, a rising tide lifts all boats. Secondly, managers and
owners will appreciate your contribution and usually hold you in
higher regard. And finally, the most direct reason: you can get
out of mindless tasks if you reason with good economics.
In my experience as an economic consultant, there were times
we needed data that was publicly available but not in a readily
usable format. Instead of spending our valuable time collecting
and preparing the data, it was quicker and cheaper to purchase
the data. Or here’s another example: our office had a coffee
machine and snacks paid for by the company. It was a nice perk,
and we appreciated it. Because we stayed in the office rather
than go outside, it may also have helped us bill more hours. So
the decision to stock snacks may have been a wise business
decision too, considering the value of our time.
In summary, knowing your billing rate can help you make better
decisions for your company (and indirectly you). But to get into
a more personal analysis, there is a second factor to consider.
Method #2: your hourly wage (how your company
compensates you for your time)
Your hourly wage is equal to your annual income divided by the
total time you spend at work. A rule of thumb is to divide your
annual compensation by 2,000, as many people work about
2,000 hours. A person making $50,000 annually is making about
$25 per hour.
Your hourly wage tells you what others are willing to pay for
your time. One place this is useful is when you are looking for
a job. For instance, a job where you make $100,000 a year
working 80 hour weeks has the same hourly wage as one where
you make $50,000 a year working 40 hour weeks. In essence, the
two jobs pay you the same money for the first 40 hours you work
per week.
As for the remaining 40 hours, it is up to you and how well you
know your preferences to decide whether you want to double
your income or have more leisure time. This analysis is
obviously incomplete, because after all, the jobs will likely have
different career paths and there are tax considerations, but it is a
starting point.
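As a rough sketch of the rule of thumb (the helper function and the
50-week assumption behind the 2,000-hour figure are mine):

    def hourly_wage(annual_income, hours_per_week, weeks_per_year=50):
        # 40 hours x 50 weeks gives the 2,000-hour rule of thumb.
        return annual_income / (hours_per_week * weeks_per_year)

    print(hourly_wage(50_000, 40))   # 25.0
    print(hourly_wage(100_000, 80))  # 25.0 -> the two jobs pay the same per hour worked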
While it is useful to know your hourly wage for your career, I am
less convinced that it can be used as the sole basis for decisions.
This is because you cannot simply trade your leisure time for
work on a salaried job, or put another way, you are not making
money every hour of the week.
The lesson is that the hourly wage is good for making career
decisions but not as good for making personal decisions. For
this, I primarily rely on the third method.
Method #3: your available leisure time (how much time you
have to use)
Time is limited. We do have 24 hours in the day, but most of that
is already taken up by fixed activities. Years ago, while working
an office job, I found that I spent my average day doing the
following things:
8 hours sleeping
8 hours working
1 hour exercising
1 hour commuting
2 hours eating
1 hour personal hygiene (throughout the day)
1 hour talking to family and friends on phone
1 hour cooking
As you can see, 23 of my hours were fixed and I had 1 hour to
choose what to do. Some days I had more time, like when I
combined talking on the phone with commuting, but on average,
I had 1 free hour.
I was not always smart about using that free time. I once decided
to take a finance class since I estimated it “wouldn’t take much
time.” While the total time commitment was not a lot, I soon
found myself spending all of my leisure time on homework, and
I decided it was not an experience I wanted to repeat.
On the other hand, there have been stretches of my life where I
have had 4 hours of leisure time in a day. In those times I know it
is better to cook and comparison shop than to spend extra time
catching up on TV shows.
Note
[1] Fox, Edward J., and Stephen J. Hoch. “Cherry-Picking.”
Journal of Marketing 69.1 (2005): 46-62.
http://knowledge.wharton.upenn.edu/papers/1235.pdf
Salami Tactics
The illusion: We agree to a series of small concessions and
ultimately give up a large concession, which we would have
initially refused.
The rational response: You can use salami tactics to get what
you want. When on the receiving end, avoid granting small
concessions from the start.
Examples
Salami tactics are a way to get a big concession by “slicing” it
into smaller concessions to which an opponent would agree.
The name perhaps comes from a hypothetical exchange in a deli.
If you ask for an entire salami for free, you will most likely be
told no. But let’s say you start by asking for a single slice of
salami. Most delis will give a free slice as a sample. You can
then ask for another slice, perhaps of a similar product, and they
will also hand that to you. Then you might ask for the first
product again, and they will probably give that again. Soon your
requests will be greeted with rejection, but by that point you
have already gotten several free slices—more than if you had
asked for all those slices at once.
Salami tactics work by getting the other side to agree to small,
trivial requests with the hidden agenda of getting them to agree
to a much larger goal.
You can use salami tactics to turn free food samples into a
sizable serving, to turn small favors from a friend into a larger
favor, or to turn small compensation concessions into a decent
raise.
Because salami tactics are effective, you must be on guard when
others use them. If the other side tries to work in small steps, that
could be a time for you to ask what their final goal is, or for you
to simply offer a reasonable final package as a take-it-or-leave-it
offer.
For example, think about how many investment advisors get new
clients. An advisor usually sends a mailer for a free restaurant
meal, which is a casual social setting with no pressure. After the
meal, the advisor asks if you are willing to hear about your
options. Many people agree and provide their contact
information. A week later the advisor follows up and offers a
free consultation with no commitments. Many people agree, and
at the appointment they are then sold on the benefits of an
investment product.
The process is not entirely sinister: people and advisors need
time to get to know each other and see if they can work together.
On the flip side, many people may work with an advisor they
dislike because they found it difficult to say “no” during any of
the steps.
To avoid getting caught up in salami tactics, you want to reject
small concessions from the beginning.
Change Blindness
The illusion: We overestimate our awareness of our
surroundings.
The rational response: Our attention is limited and we focus
only on specific details. We can be oblivious to huge changes,
so accept that you will miss many details and improve your
ability to focus on the important ones.
Examples
How much of a movie are you really paying attention to? The
first time you watch a movie, you are probably trying to follow
the story, hear the dialog and music, and understand what the
main characters are doing. You will most likely overlook the
many goofs and continuity errors that made it into the final cut.
The film Braveheart, for example, takes place in the 13th
century, long before the invention of cars. But during one of the
battle scenes, there is a white van clearly visible in the lower left
hand corner of the screen [1]. Most people are paying so much
attention to the main story that such mistakes are not even
noticed. Many films have changes in lighting, actor and object
positions, and even costume changes that we might not notice on
a first viewing.
Change blindness is scary, but there are reasons we have this
bias. Change blindness is a consequence of us focusing on only
the most important details. If we truly absorbed every visual
detail of a movie, we would never enjoy any movie because we
would notice all the mistakes. Or if we paid complete attention
to how magicians perform illusions, we would never enjoy any
illusion because we would instantly realize how every trick is
done. In a way, change blindness allows the world to be a more
magical place.
This means, unfortunately, that change blindness is not a bias
that you can simply overcome with practice. While you can be
better at paying attention to some details, you cannot suddenly
become hypersensitive to all changes in the environment. Our
brain is overloaded with stimuli and our attention is limited. At a
minimum it is important to accept change blindness and ask for
more time to look over details.
From a financial standpoint, change blindness can be a
disadvantage. It means that companies can make significant
changes to terms and conditions and most people will be
oblivious. For example, a cell phone company can increase one
of its itemized charges by $1 and most people may not even
notice their bill has increased. Or a food company can shrink its
products and charge the same price without shoppers being
aware. Change blindness leaves us susceptible to agreeing to
unfavorable terms. Any time a company changes its terms or
introduces a new package, pay extra close attention to the
differences.
Notes
[1] The website IMDB.com is one place to find goofs for a film.
The book The Invisible Gorilla by Christopher Chabris and
Daniel Simons discusses many examples of change blindness.
4. Uncertain About Uncertainty:
Calculate The Probability
Most of the examples have concerned decision-making with
certainty about the options and results. But when you invest in
stocks or buy life insurance, the best option depends on an
uncertain future. To make the right decision you need to know
the odds and make the most favorable gamble.
This is easier said than done. We often get the odds wrong
because probability and statistics are not always intuitive, as
exemplified by the famous Monty Hall Problem.
Imagine you’re on a game show, and you have to select one of
the doors 1, 2, or 3. There is a car behind one of the doors and
nothing behind the other two. You select door 1, and then I, as
the host, give you a chance to change your mind.
Proposal 1: I see you have chosen door 1. I know the car is not
behind door 2, so I am going to open door 2 for you right now
and show you there is nothing behind it. Does that change your mind in any
way? Do you want to stay on door 1, or do you want to switch to
door 3?
It seems like the car should be equally likely to be behind the
two remaining doors 1 and 3, so you think that the choice does
not matter. If you stay on door 1 or switch to door 3 the
probability of winning would be 50 percent.
The instinctual answer turns out to be wrong. The game heavily
favors the choice of switching, which has two times the
probability of winning.
One way to see this is by considering a rephrased version of the
proposal with the salient details emphasized and elaborated.
Proposal 2: I see you chose door 1. Would you like to stay on
door 1, or would you like to switch to the other two doors?
You’ll win if either of the doors 2 or 3 has the grand prize of the
car. Give it some thought. To add suspense to the show, while
you make your decision, I will open a door that I know has
nothing behind it (this is door 2). Then we’ll play a drum roll to open
either your door 1 or the other unopened door (which I know is
3) and see if you won.
In this version, it is easier to see that switching is the correct
choice. When you switch, you win for 2 of the doors; when you
stay, you win only for the initial door you picked. Switching
does have a 2/3 probability of winning compared to staying,
which has a 1/3 probability.
The phrasing could be simplified even further as follows.
Proposal 3: Would you like to stay on door 1, or would you like
to switch to the other two doors? You’ll win if either of the doors
2 or 3 has the grand prize of the car.
This version of the Monty Hall Problem is not confusing at all.
Without the distracting detail of the host opening a door he
knows has nothing behind it, it is easy to see that switching is
better and has a probability of winning of 2/3.
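If the argument still feels slippery, a short simulation can settle
it. Here is a minimal Python sketch of the game as described above;
the function name is mine:

    import random

    def monty_hall_win_rate(switch, trials=100_000):
        # Estimate the probability of winning the car for a given strategy.
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)    # door hiding the car
            pick = random.randrange(3)   # contestant's initial choice
            # The host opens a losing door the contestant did not pick.
            # Switching then wins exactly when the initial pick was wrong.
            wins += (pick != car) if switch else (pick == car)
        return wins / trials

    print(monty_hall_win_rate(switch=False))  # about 1/3
    print(monty_hall_win_rate(switch=True))   # about 2/3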
The Monty Hall Problem illustrates how we are biased with
probability: we can be influenced by how a problem is presented
to us, and we can even be influenced by details that do not
matter.
In this chapter, I will go over some of the common mistakes
about probability and statistics.
Conjunction Fallacy
The illusion: We consider specific, representative events as
more probable than generic events.
The rational response: Remember the mathematical logic that
general events are more likely to occur than specific ones.
Consider the events in absolute terms by estimating their
frequencies.
Examples
Here is a classic thought experiment. See if you can figure it out.
Linda is 31 years old, single, outspoken, and very bright. She
majored in philosophy. As a student, she was deeply concerned
with issues of discrimination and social justice, and also
participated in anti-nuclear demonstrations.
Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist
movement.
In 1983 Amos Tversky and Daniel Kahneman posed this
question in an experiment [1].
Instinctively, most people associate Linda’s background with
being active in the feminist movement. In the experiment, 85
percent answered that scenario 2 was more probable.
Mathematically this is incorrect. Scenario 1 only requires Linda
to be a bank teller (event A) while scenario 2 requires Linda to
be both a bank teller (event A) and active in the feminist
movement (event B).
Formally, the probability of events A and B occurring together is
always less than or equal to the probability of event A alone. In
other words, scenario 2 is a more specific version of scenario 1,
and therefore it must be less probable.
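A toy simulation makes the inequality concrete. The two
probabilities below are invented purely for illustration; whatever
values you choose, the conjunction can never be counted more often
than the single event:

    import random

    trials = 100_000
    tellers = 0
    feminist_tellers = 0
    for _ in range(trials):
        is_teller = random.random() < 0.05     # hypothetical 5% chance
        is_feminist = random.random() < 0.60   # hypothetical 60% chance
        tellers += is_teller
        feminist_tellers += is_teller and is_feminist
    print(feminist_tellers <= tellers)  # always True: P(A and B) <= P(A)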
The Linda problem illustrates that we are tempted to associate a
likely story with a higher probability.
The issue may be that we are confusing the mathematical word
“probable” with the practical question of whether a story is “believable.”
For example, let’s say a co-worker calls in sick. The next day
you ask what happened. Consider two situations: (1) the co-
worker says he is okay now, and (2) the co-worker explains he
had the flu but he is okay now. Which story is more probable?
Mathematically situation (1) is more probable because it is a
more general version of situation (2).
However, that’s not how we would judge the two stories. Many
times people who lie lack specifics, so we use corroborating
details as confirmation we are getting the truth.
That is, we are not simply judging whether situation (1) or
situation (2) is more mathematically probable, given the co-
worker said he was sick. We are also judging whether the co-
worker was actually sick given the stories of situations (1) and
(2).
It is easy to get confused by the probability of an event versus
whether the event is probable (believable). One way to get
around the conjunction fallacy is to think about absolute amounts
and frequencies.
De-bias: think about frequencies
In the Linda problem, 85 percent felt that Linda being a bank
teller and involved in the feminist movement was more probable
than Linda just being a bank teller. But do people really think the
probability of the specific event is larger than the general event?
An experiment posed the same situation in terms of frequency
[2]. People were told to imagine there are 100 people with the
description of Linda. Then they were asked to estimate the
frequency of the scenarios. Out of 100, how many would be
bank tellers? Out of 100, how many would be bank tellers and
active in the feminist movement?
In these experiments, almost everyone put a higher frequency on
an individual being a bank teller versus an individual being a
bank teller and involved in the feminist movement. Thinking in
absolute terms and frequencies cut through the conjunction
fallacy.
When you need to consider the mathematical probability of an
event, think about the frequencies of contingencies and rank
them accordingly. Your intuition about the frequencies of
scenarios should be a better guide to probability than your
intuition about which scenario feels more probable.
Notes
[1] Tversky, Amos, and Daniel Kahneman. “Extensional Versus
Intuitive Reasoning: the Conjunction Fallacy in Probability
Judgment.” Psychological Review 90.4 (1983): 293.
[2] Hertwig, Ralph, and Gerd Gigerenzer. “The ‘Conjunction
Fallacy’ Revisited: How Intelligent Inferences Look Like
Reasoning Errors.” Journal of Behavioral Decision Making 12.4
(1999): 275-305. I learned about the de-biasing study from
Streetlights and Shadows: Searching for the Keys to Adaptive
Decision Making by Gary Klein.
Selection Bias
The illusion: A statistic may appear overly positive if the
sample is based on a group with specific qualities rather than a
random sample.
The rational response: Consider if the sample is random, and
consider what the typical result would be for a random sample.
Examples
In America, virtually every major automobile insurance
company advertises a savings of $500 over competitors. How is
it possible that company A saves $500 over other companies and
company B also saves $500 over other companies, including
company A?
The nuance is how the $500 savings are calculated [1]. An
insurance company advertises that drivers who switch to their
company voluntarily report that they save an average of $500.
Now many people switch insurance companies to save money, so
it is not surprising that drivers who switch would report savings.
Furthermore, the survey asked people to voluntarily report if
they saved money. It stands to reason that people who saved
money might be more willing to report their experience than
people who switched and ended up paying more.
The statistic of $500 savings is based on a self-selected sample:
it is based on new customers who likely switched to save money.
The statistic also excludes existing customers, who may very
well be paying high rates from yearly rate increases. In other
words, the statistic may not be representative of an average
customer for the insurance company.
Researchers worry about selection bias when collecting data for
experiments. They try to use random sampling methods so as not
to attract participants with a particular predisposition.
Statistics not from academic research are more likely to have
selection bias. For example, online reviews of products are more
likely to be from people with extreme experiences: people who
loved or hated the product. Or a town’s restaurant reviews are
more likely to come from its residents. If a town has many single
family homes, the restaurant reviews may reflect how family-
friendly the restaurant is, rather than whether the food is a good
value.
Whenever you see a statistic, consider how it was collected and
whether the sample is self-selected for specific qualities or
whether it is actually representative (or useful) for your
circumstances.
Note
[1] Bob Trebilock. “Car Insurance: Save Money by Switching?”
CBSNews, 4 May 2010. http://www.cbsnews.com/news/car-
insurance-save-money-by-switching/
Survivorship Bias
The illusion: This is an example of a selection bias. We
overestimate the typical result because the sample excludes
observations with poor results.
The rational response: Look for items that are excluded.
Consider the typical result.
Examples
Imagine you get a letter in the mail predicting which football
team will win the weekly game. The prediction turns out to be
correct, which you disregard as a lucky guess. Then a letter
comes each of the following 4 weeks, and surprisingly each of
the predictions is correct. You then get a letter offering the secret
of the prediction system for $1,000 so you can bet the remaining
games. The past predictions were accurate. Should you consider
the offer?
This perfect prediction scheme can be implemented as a mail
scam, and the person sending the letter needs no prognosticating
ability. In the first week, the scammer sends out 1,024 letters,
half predicting one team wins and half predicting another team
wins. For the second week, the scammer only sends letters to the
512 people who received a correct prediction in week one, and
the same scheme is used to send half of the group one prediction
and the other half the other prediction. In subsequent weeks, the
scammer sends 256, 128, and 64 letters, only to those who
received all correct predictions, and dividing each group of
letters in half for the next prediction. Then the scammer can send
letters to 32 people asking them to purchase a prediction system.
They have seen 5 weeks of correct predictions, so they might be
tempted to buy the false promise.
The mail scam illustrates the survivorship bias. People who
receive correct predictions over 5 weeks perceive the accuracy
based on their own experience. In reality, the scammer is
excluding bad results week by week to give a false sense of
accuracy.
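The scam’s arithmetic is easy to verify with a few lines of Python
tracing the halving scheme described above:

    letters = 1024
    for week in range(1, 6):
        # Half the letters predict each team, so after each game only
        # half the recipients have seen a correct prediction.
        letters //= 2
        print(f"After week {week}: {letters} recipients have seen only correct picks")
    # After week 5, 32 people have seen 5 straight "correct" predictions.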
The survivorship bias can affect many financial decisions. For
example, an investment company might claim its mutual funds
have a high rate of return over 10 years, higher than other
companies. What might be the problem with that statistic? The
investment company might only average its current mutual fund
offerings [1]. Over 10 years some funds would have performed
poorly and closed down. By averaging only the current
offerings, the investment company is inflating its returns. This is
akin to the mail scammer claiming 100% accuracy to the 32
people who got all correct predictions. While the statistic is true,
it is also misleading because the scammer excluded the bad
predictions.
The survivorship bias exists in health claims as well. Imagine a
diet program that advertises an average of 20 pounds lost for
people completing a 26-week program. The average weight loss
may be inflated if many people dropped out during the 26 weeks
of the program. Suppose only 1 in 4 who started the program
completed it. The average 20 pounds lost reflects the efforts of
people who persisted, not the typical result which was to drop
out.
Note
[1] Ian McDonald. “Mutual-Fund Math Puts A Sheen on
Returns.” The Wall Street Journal, 23 July 2004.
http://www.wsj.com/articles/SB109054599613871951
Confirmation Bias
The illusion: To prove a rule is valid, people limit their search to
examples that obey the rule.
The rational response: Look for examples that contradict the
rule, and evaluate a rule’s accuracy using a broader, perhaps
random, sample.
Examples
Here is a test [1]. I am thinking of a rule about triplets of
numbers and your job is to figure out the rule. I will let you
know the sequence 2, 4, 6 follows my rule. You can ask me other
sequences of numbers and I will let you know if they follow the
rule. Then you have to tell me what the rule is. Want to play?
People typically ask about sequences like the following:
10, 12, 14
5, 7, 9
The answer is yes to both triplets. Upon hearing the
confirmation, people often conclude the rule is that each
following number is 2 more than the previous one. Then they
will be surprised to hear their answer is wrong.
Where was the mistake? The error is that the person jumped to a
conclusion about the rule and then only asked about sequences
that already followed the rule. Since each affirmative response
confirmed their initially wrong suspicion, this mistake is known
as the confirmation bias.
To test a rule, you should try to falsify it by testing sequences
that disobey the rule. For example, you might ask for the
sequence 1, 2, 3 (yes), or the sequence 9, 7, 5 (no), or perhaps
the sequence 7, 9, 5 (no). In other words, you should test a
variety of sequence types to learn which sequences obey and do
not obey the rule.
The rule I had in mind is the three numbers have to be in
ascending order. A person might have learned this by asking
about sequences that increase by an amount other than 2, other
sequences in ascending order, sequences in descending order,
and sequences in other random orders. Instead,
most people focus on sequences that increase by 2 and they fall
for the confirmation bias.
Sports coverage has plenty of examples of the confirmation bias.
You will hear things like, “He’s in a contract year so he’s going
to play better,” or “When everyone says one team will win,
usually the other team wins.” If you ask people for evidence,
they will rattle off many examples that confirmed the suspicion.
But they will never remember the examples that disconfirmed
the rule, and perhaps there are so many contradictory examples
that the rule is wrong.
Confirmation bias is common in the business world too. For
example, it is commonly said that golf is great for your career,
and to support the notion, people will point out many CEOs play
golf and have made business deals on the golf course. The story
is misleading because there are many people who have never
played golf and have also achieved success.
Shoppers can fall victim to confirmation bias too. One of my
friends will only fill his car with gas from a single company,
even if it is inconvenient and more expensive. He got the idea
that mixing gas from different companies is bad for his engine.
He explains that all of his cars have lasted a long time, and he
takes that as proof his method is valid. The problem is he is only using
behavior that confirms the rule. To test his rule, he would have to
fill up gas at many stations and then see if that leads to any car
trouble. Or he could survey people who did mix gas from
different companies, and he might learn their cars last for a very
long time as well.
The confirmation bias is an example of a self-selection bias, and
the root of the problem is the sampling method. The
confirmation bias can be avoided by considering a broad sample
and testing examples that disobey the rule. In other words, you
should draw conclusions based on an honest evaluation of
experiences, not just on a handful of experiences that were
positive.
Note
[1] This test is known as the Wason 2-4-6 task and originates
from an experiment of Peter Wason in 1960. Wason, Peter C.
“On the Failure to Eliminate Hypotheses in a Conceptual Task.”
Quarterly Journal of Experimental Psychology 12.3 (1960): 129-
140.
Coincidences
The illusion: People are often surprised when unlikely events
happen, saying things like, “What are the odds of that?”
The rational response: Unlikely events happen. Coincidences
happen. Do not be impressed by seemingly rare patterns that are
bound to happen by chance.
Examples
When I visited New Orleans, I was at a sandwich shop and a
friendly couple recommended what was best. After a brief chat,
it turned out they had just moved from where I was living. And
one person was co-workers with one of my good family friends.
We delighted at how small the world is. But secretly I was not
truly impressed. The fact is coincidences are bound to happen.
For example, in Germany, the lottery did the seemingly
unthinkable: on June 21, 1995 the six winning numbers were
exactly the same as the numbers that were drawn on December
20, 1986. People were stunned and some probably thought there
was a conspiracy or a secret pattern to lottery numbers.
In fact, it was not very surprising. The chance that some
winning lottery number would have been repeated during those 9
years was about 28 percent, a fairly high probability [1]. This is
higher than the odds of the pedestrian feat of correctly calling
two coin flips in a row (25 percent).
It may be harmless to enjoy coincidences, but avoid being
exploited by them. Salespeople often turn out to have a common
connection with you: they know one of your relatives, they grew
up near you, or they have the same taste in music or fashion. The
fact is when they compare everyone they know against everyone
you know, there are bound to be some common connections. You
can appreciate the serendipity, but remember that is not reason
enough to buy from that salesperson rather than from someone
without a connection.
[1] Mathematical calculation of lottery (optional)
The German Lotto 6/49 had the following rules: there were 6
numbers picked from the numbers 1 to 49 (with no repeats).
Between the two dates in question, there were 3,016 drawings.
What’s the chance that some winning number is repeated in two
or more drawings?
First, let’s calculate the total number of possibilities in a single
drawing. We need to pick 6 numbers from the possible 49. That
means there are “49 choose 6” = C(49, 6) = 13,983,816 ways to
pick a lottery ticket.
Now let’s imagine the lottery is repeated and calculate the
probability the new number is different from any previous
drawing.
The first drawing could be any sequence of numbers. The second
drawing has to be a sequence that is different, so there is one less
possibility. The probability of the second drawing being different
is:
(c - 1)/c
where c = 13,983,816 is the total number of possible draws. For the
third drawing to be different, it has to be chosen from the set of
all draws except the 2 already used, so the probability it is
different is:
(c - 2)/c
More generally, the chance that the (n + 1)th drawing is a unique
set of numbers is:
(c - n)/c
In the German lottery, there were 3,016 drawings over a 9 year
period. The chance that every single drawing was different is the
product of the probabilities that each subsequent drawing was
unique. This is:
[(c - 1)/c] × [(c - 2)/c] × … × [(c - 3015)/c] ≈ 0.722
The probability that some number would be repeated is the
complement event, which is 1 minus this probability. So the
chance some sequence was repeated is 1 - 0.722… ≈ 28 percent.
Over time it becomes more and more likely that some set of
winning numbers will be repeated.
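The calculation takes only a few lines in Python (math.comb
computes the “49 choose 6” figure used above):

    from math import comb

    c = comb(49, 6)              # 13,983,816 possible draws
    p_all_unique = 1.0
    for n in range(1, 3016):     # 3,016 drawings -> 3,015 chances to repeat
        p_all_unique *= (c - n) / c
    print(1 - p_all_unique)      # about 0.28: a repeated draw was fairly likely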
The lesson is that individual events can seem surprising, but over
time extraordinary things are bound to happen. We can find
delight in these chance occurrences, but we should realize they
are not as surprising as we might think.
Notes
I read about the German lottery in Leonard Mlodinow’s book
The Drunkard’s Walk. I calculated the German lottery repeating
chance using the method presented on the website
wizardofodds.com (http://wizardofodds.com/ask-the-
wizard/lottery/).
False Positive Bias
The illusion: We overlook that even accurate tests can yield
false positives when the test is looking for a rare condition.
The rational response: Consider the frequency of positive
results and see if a positive test really has power.
Examples
This section also appears in my book 40 Paradoxes in Logic,
Probability, and Game Theory.
Suppose a rare disease afflicts 1 percent of the population, and a
test is 99 percent accurate. Fearing an outbreak, public health
officials mandate testing for everyone in a town of 100,000. If a
person tests positive, what is the chance the person actually has
the disease?
Because the test is 99 percent accurate, it might seem every
result should be 99 percent accurate too. In fact, the results are
expected to be quite inaccurate: someone who tests positive will
have a 50 percent chance of not having the disease.
The reason is the disease has a low prevalence. Even though the
test will accurately identify people who have the disease, it will
inadvertently misdiagnose healthy people as false positives.
Let’s see why by considering the expected frequencies of the
results. Of the 100,000 people in the town, the disease affects 1
percent, or 1,000 people. The test, which is 99 percent accurate,
will identify 990 of the group as positive.
What about the healthy people? Of the 99,000 healthy people,
the test will be 99 percent accurate. That means the test will
identify 98,010 of the group as healthy. That leaves 1 percent, or
990 cases, that will be wrongly identified as false positives.
To summarize, the test will yield 1,980 positives for which 990
are correctly identified and 990 are false positives.
So what does that mean? A large portion of people who tested
positive—50 percent (990/1,980)—were false positives.
While the test is 99 percent accurate, someone who tests positive
only has a 50/50 chance of having the disease. The result is a
consequence of the disease being rare. While the test can
correctly identify people who have the disease, it will also
misidentify many healthy people as false positives.
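Here are the expected frequencies from the example as a short
Python sketch; the variable names are mine:

    population = 100_000
    prevalence = 0.01   # 1 percent of the town has the disease
    accuracy = 0.99     # the test is 99 percent accurate

    sick = population * prevalence                           # 1,000 people
    true_positives = sick * accuracy                         # 990 correct positives
    false_positives = (population - sick) * (1 - accuracy)   # 990 false positives
    print(true_positives / (true_positives + false_positives))  # 0.5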
First, the problem illustrates why it might be necessary to run a test,
even an accurate one, a second time. Using the same numbers
above, if the test were run a second time, the 50 percent false
positive rate would drop to just 1 percent. (Of the 990 false
positives, only 10 would test as false positives a second time. In
contrast, 980 of the 990 correct positives would test as positive a
second time. So only 10 of the 990 people who tested positive
two times would be false positives. This is a 1 percent rate of
false positives. This level of accuracy is more in line with the
stated 99 percent accuracy rate.)
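Continuing the sketch with the same numbers, the retest arithmetic checks out:

```python
true_positives, false_positives = 990, 990  # first-round positives
accuracy = 0.99

# who tests positive a second time?
sick_twice = true_positives * accuracy            # ~980 people
healthy_twice = false_positives * (1 - accuracy)  # ~10 people

rate = healthy_twice / (sick_twice + healthy_twice)
print(rate)  # ~0.01, in line with the stated 99 percent accuracy
```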
Second, the problem suggests why mandatory screening could be
a bad idea. For instance, the federal government could require
that every couple get tested for various diseases before marriage.
But given the millions of marriages every year, this blanket policy
would probably produce an overwhelming number of false
positives, potentially ending many relationships unnecessarily.
What’s the alternative to testing everyone? Doctors are cautious
not to run a barrage of tests on everyone. They take medical
histories and look for symptoms. They often administer tests to
confirm their suspicion, not to screen for a problem.
Disagreeing Experts
The illusion: When two credible sources disagree about an
issue, we tend to think both sides of the issue are equally valid
viewpoints.
The rational response: You can use the new information to
update your previous belief. One side of the issue might still
have more validity. Calculate the conditional probability (use
Bayes Rule).
Examples
One doctor tells you to eat whole wheat bread and vegetarian
meals; another tells you to avoid carbs and eat meats and
cheeses. One roofer suggests you completely tear down your
roof; another tells you to do a less expensive overlay. One
website explains that Apple has the greatest tech products;
another suggests you should opt for competitors.
Who is right, and how can you handle conflicting information?
In many cases, a check on credibility leads to better decisions.
You should discount opinions from financially motivated
sources, nameless reviewers, gossip sites, and chain emails
in favor of claims from unbiased research, popular reviewers with
reputations, credible news agencies, and trusted friends.
But weeding out bad information only goes so far. Many times
two credible experts will look at the same evidence and data and
offer diametrically opposed opinions. What then?
The wrong answer is to conclude both sides are equally valid.
This is a particularly common error in American media that
strives to offer “balanced” coverage.
A better method is to update your beliefs according to the
statistical formula known as Bayes rule, which I will explain
with a math problem.
A weather example
Let’s say you are curious about the chance it will rain on Sunday
[1]. Historically there is a 75 percent chance of rain. This year
you check the forecast and two very reliable stations disagree.
Station one, which is right in 9 out of 10 forecasts, says that it
will rain. Station two, which is right in 11 out of 12 forecasts,
independently predicts that it will not rain.
What is the probability that it will rain on Sunday?
Calculating the probability
Many, many people interpret conflicting opinions to mean both
viewpoints of an issue are equally likely to be correct. In this
example, we will show this is not true: it is more likely to rain,
even though two credible weather forecasts disagree.
The proper approach is to use Bayes rule, which is a way to
update your beliefs with new information. The idea is the
following. Before you heard any weather forecast, your best
guess that it would rain was the historical average of 75 percent.
Now if the two stations forecast rain, that extra information
would be valuable, and it would strengthen your belief that it
would rain. Or similarly, if they both forecast that it would not
rain, that would decrease your belief that it would rain. The
amount that your belief should be adjusted can be calculated
using Bayes rule. Since you are updating your belief, this
process is called Bayesian updating.
Here is how Bayes rule works. Let’s say you are curious about
the chance event A happens. Your best guess is the probability,
Pr(A). Now let’s say that event B happens and that influences the
likelihood that A will occur. What’s your best guess now? You
want to know the probability of event A given that event B
happened. Bayes rule states you can calculate Pr(A|B) by the
formula Pr(A and B)/Pr(B), which also equals
Pr(B|A)Pr(A)/Pr(B). You are basically limiting yourself to a
universe where you know event B happens, of which you count
how many times event A will happen as well.
Instead of using the formula directly, we’ll do an equivalent
thought experiment. We’ll draw out the information using
probability branches. We will draw out a path corresponding to
each state of information. We will also label each branch with
the probability it occurs. Finally, we multiply the numbers along
the branches to calculate the chance of each event.
Here’s how it works. We start out drawing two branches for the
events that it will rain or not rain. Historically we know there is a
75 percent chance of rain and a 25 percent chance it will not
rain. So here’s how the tree looks in the first stage.
[Figure: probability tree, first stage: rain (75 percent) and no rain (25 percent)]
Next we draw branches for the forecasts from each station. Station
one said it will rain. Since station one is correct 9/10 of the time, it
will correctly forecast rain 9/10 of the time when it rains. However,
it will be wrong 1/10 of the time and forecast rain when it will not
rain.
[Figure: the tree extended with station one's forecast branches: 9/10 on the rain branch, 1/10 on the no-rain branch]
Now we will draw similar branches for station two, which is wrong
1/12 of the time (forecasting no rain when it will actually rain) and
correct 11/12 of the time in predicting it will not rain.
We will also multiply the numbers on the branches to get the
frequency of the possible events.
[Figure: the completed tree with station two's branches and the multiplied probabilities]
Now we can answer our original question. Once we hear these
forecasts, what's the chance it will actually rain?
From our tree, we first tabulate how often these conflicting
forecasts occur. The answer is about 7.9 percent, made up of a
5.6 percent chance when it rains and a 2.3 percent chance
when it won't rain.
What is the chance it will rain, based on hearing these forecasts?
Out of the 7.9 percent of cases, there is a 71 percent chance
station one is correct that it will rain (found by 5.6/7.9), and
there is only a 29 percent chance station two is correct that it will
not rain (found by 2.3/7.9). The answer is not close to 50 percent!
In other words, 71 percent of the time we are on the left
branch—that it will rain—and only 29 percent of the time we are
on the right branch.
Before the forecasts we thought there was a 75 percent chance of
rain. After hearing the forecasts, we believe there is a 71 percent
chance of rain. The drop in percentage is because station two
said it would not rain, and station two is slightly more accurate
than station one.
Nevertheless, notice something interesting: the belief only
changed by a little bit, even though both stations offer reliable
forecasts.
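The entire tree collapses into a few lines of arithmetic. Here is a minimal sketch of the Bayesian update in Python:

```python
p_rain = 0.75     # prior: the historical chance of rain
s1_acc = 9 / 10   # station one, which forecast rain
s2_acc = 11 / 12  # station two, which forecast no rain

# joint probability of each branch that matches the forecasts we heard
rain_branch = p_rain * s1_acc * (1 - s2_acc)           # ~0.056
no_rain_branch = (1 - p_rain) * (1 - s1_acc) * s2_acc  # ~0.023

posterior = rain_branch / (rain_branch + no_rain_branch)
print(posterior)  # ~0.71: it is still more likely to rain than not
```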
What this means
The example showed that when two experts disagree, it is not the
case that your updated view should be a 50/50 chance to either
opposing viewpoint. The lesson is this: the two viewpoints
somewhat cancel each other out—what you’re left with is a
slight update towards the more reliable expert.
What’s missing in mainstream media is the most important piece
of information: the belief you had before you asked the experts,
also known as a prior belief. Unless you are completely
uninformed, you will likely come into the discussion with some
educated guess. And if experts disagree, then you can revise
your guess, but only by a little.
The math suggests the following rule of thumb. When experts
disagree, search for more information. If both sides are equally
reliable, you may be right to go with your gut.
Now in all of this I have left out the question of where this prior
belief comes from. Most of the time you come into situations
with some sense of what is going on. You can have a better
estimate if you educate yourself about history and tradition to
place your prior beliefs in line with understood phenomena.
Note
[1] The problem is exercise 1.6.5 from Probability, Markov
Chains, Queues, and Simulation by William J. Stewart.
Base Rate Fallacy
The illusion: We misjudge probability by neglecting the general
circumstances that might mitigate the importance of specific
information.
The rational response: Think about the circumstances that
might have led to the outcome and judge if the result is really
exceptional. Calculate the conditional probability (use Bayes
rule).
Examples
Is an accurate witness always a good witness? This is a topic
raised in the famous taxicab problem posed by Amos Tversky
and Daniel Kahneman in a 1982 study [1]. The problem is
similar to the false positive bias. Here is the setup.
A city only has blue and green cabs, of which 85% are
green and 15% are blue. A hit and run accident happens
during the night. A witness testifies to having seen a blue
cab. In similar conditions, the witness could correctly
identify the color 80% of the time and would fail 20% of
the time. What is the probability the cab was blue, rather
than green, given the witness said the cab was blue?
The gut instinct is to think the witness is 80 percent accurate.
That answer would be correct if there were an equal number of
blue and green cars. But since most cabs are green, that makes it
more likely the witness actually did see a green car and
misidentified it as blue.
In other words, we need to take the proportion of cabs into
account when making this calculation. It turns out that there is
only a 41% chance the witness who allegedly saw a blue car was
correct in the car being blue.
Here is the mathematical calculation. To start, we can imagine
the total population of cabs as blue and green.
Out of 100 cabs, we would have 15 blue ones and 85 green
ones.
The next step is to figure out the proportion of cabs correctly and
incorrectly identified. The witness is 80% accurate, which means
80% of blue cars will be identified as blue, and 80% of green
cars will be identified as green. The remaining cars will be
incorrectly identified.
Of the 15 blue cabs, the witness correctly identifies 80%, which
is 12 cabs, and misidentifies the remaining 20%, or 3 cabs, as
green.
Of the 85 green cabs, the witness correctly identifies 80%, which
is 68 cabs, and misidentifies the remaining 20%, or 17 cabs, as
blue.
Finally, let us only focus on the cars that were identified as blue.
This comes from the blue cars correctly identified and the green
cars incorrectly identified.
There are a total of 29 cabs identified as blue, of which 12 were
blue and correctly identified, and 17 were green and
misidentified.
This means if the witness says a cab is blue, the cab actually is
blue in only about 12/29 ≈ 41% of the cases. The remaining
17/29 ≈ 59% of the cases are false positives, where the cab is
green and misidentified.
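The same frequency count takes only a few lines of Python:

```python
blue_cabs, green_cabs = 15, 85  # out of 100 cabs in the city
witness_accuracy = 0.80

called_blue_correctly = blue_cabs * witness_accuracy       # 12 cabs
called_blue_wrongly = green_cabs * (1 - witness_accuracy)  # 17 cabs

p_blue = called_blue_correctly / (called_blue_correctly + called_blue_wrongly)
print(p_blue)  # ~0.41, nearly triple the 15 percent base rate
```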
It is surprising that an accurate witness might not offer accurate
evidence. That is the negative way of viewing the result. The
positive aspect is the witness increased the likelihood of the car
being blue to 41%—which is nearly 3 times the base rate of
15%. The witness does matter, but the effect is not as strong as
people expect.
The problem is meant to illustrate that we should always place
the accuracy of information in relation to the circumstances of
the environment. Even an accurate witness will have a hard time
avoiding false-positives.
Note
[1] Tversky, Amos, and Daniel Kahneman. Evidential Impact of
Base Rates. No. TR-4. Stanford University Department of
Psychology, 1981.
Excessive Risk Aversion
The illusion: The fear of a loss can outweigh the high expected
value of a gain.
The rational response: Take gambles with a high expected
upside and avoid those with too much downside.
Examples
Would you rather have $1,000 for sure, or gamble with a coin
flip that you could get $2,000 or $0 with equal odds?
Rational agent theory supposes people are risk neutral. Since
both options have the same expected payout of $1,000, you
would be happy with either choice. But we prefer certain gains
over expected gains (we are risk averse), and that means we tend
to prefer the guaranteed $1,000 payout over a gamble with the
same expected value.
Risk aversion is about taking a guaranteed outcome versus
facing a similar risky gamble. It is the amount of the guaranteed
outcome that indicates how risk averse someone is. Consider the
gamble of having a coin flip to get $2,000 or $0. How much
money would you rather have for sure instead of taking this
gamble?
Obviously you’d rather have $2,000 for sure, and $1,500 for
sure, and most likely even $1,000 for sure. These are all
situations where you’d be getting at least the expected value of
the gamble. This level of risk aversion is sensible and
understandable. What about $900? Now you probably have to
think. Accepting $900 means you are taking less than the
expected value. You have to weigh your options of winning big
versus going home with nothing.
We can repeat the exercise for smaller guaranteed amounts.
What about $800, or $500, or $100? At some point you would
turn down the guaranteed money and opt for the gamble. The
smallest guaranteed amount you would still accept instead of the
gamble indicates your level of risk aversion. People with a higher
level of risk aversion are willing to take less guaranteed money
for a given gamble.
The extreme case is someone who is infinitely risk averse. This
is a person who will accept even $1 over the coin flip of $2,000
or $0. In other words, infinite risk aversion considers only the
worst-case scenario.
If you view life as a series of gambles, then probability suggests
you should consider gambles with positive return (and low to no
risk of catastrophe). While you will lose some gambles, over
time you will win more and can expect a healthy gain.
Risk aversion is the idea that a bird in the hand is worth two in
the bush. This can be a good thing in leading us to conservative,
safe decisions. For instance, risk aversion is why we generally
prefer higher salaries over the chance of performance bonuses
and why we opt to pay insurance premiums to protect against
catastrophic loss.
But too much risk aversion can be bad as well. People who pay
for extended warranties on cell phones, electronics, and
treadmills often overpay for the relatively small risk of product
failure. Another example is people who hold too much cash and
bonds instead of investing in stocks which are risky but have
generally higher returns.
Winner’s Curse
The illusion: We overpay when competing for items with a
common value.
The rational response: Recognize when the winner’s curse
might happen and shave your bid so that you would not overpay
when you win.
Examples
Imagine an economics professor holds up a jar of quarters for
auction to a classroom. Each student secretly guesses the value
of the coins, and the professor will sell the jar for the highest bid.
What can we expect out of this auction?
If students are guessing honestly, typically the average of the
guesses will be very close to the actual value of the coins.
Averaging the guesses tends to average out errors and be
accurate, a phenomenon dubbed “the wisdom of crowds” [1].
This turns out to be bad news for the auction winner. If the
average guess is close to the true value of the coins, that means
the highest guess tends to exceed the true value of the coins. The
winning student has bid too much for the jar and will tend to lose
money.
You do not want to win this auction! That is the winner’s curse.
When the winner’s curse happens
When you bid for a house or shop on eBay, you usually have a
personal, private value for the item you want to buy. You can
safely evaluate what the item is worth to you and be sure to bid
less than that. You might overpay in the sense that you could have
paid less, but you can avoid bidding more than you value the item.
The winner’s curse happens in auctions where the item has a
common value. The jar of coins, for example, is worth the same
monetary value to everyone. When you bid, you are guessing its
value, and the person who guesses too high will win and thereby
overpay for the coins.
Many auctions have common value components. Think about a
sports league auctioning off television broadcast rights. The
value of the broadcast depends on viewership level and interest
of advertisers, both of which are commonly valued by television
networks. Or think about a sports player in free agency. The
player’s expected contribution to each team is similar and like a
common value. And in fact, in these auctions, television stations
have been known to overpay for broadcast rights and teams have
been known to overpay for a star free agent.
In common value auctions, there is a selection bias that the
winner overestimates the value of the item, so the winner tends
to lose money on average.
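A small simulation illustrates the selection bias. The sketch below assumes each bidder's guess is an unbiased estimate of the jar's value with some spread; the specific numbers (a $50 jar, 30 bidders, a $10 standard deviation) are made up for illustration.

```python
import random

TRUE_VALUE = 50.0  # hypothetical dollar value of the coins
BIDDERS = 30
ROUNDS = 10_000

winner_overpaid = 0
for _ in range(ROUNDS):
    guesses = [random.gauss(TRUE_VALUE, 10.0) for _ in range(BIDDERS)]
    if max(guesses) > TRUE_VALUE:  # the highest guess wins the auction
        winner_overpaid += 1

print(winner_overpaid / ROUNDS)  # close to 1.0: the winner almost always overpays
```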
One way bidders can avoid the curse is by bid shaving, which
means to reduce a bid to account for the risk of overpaying.
There are mathematical methods to calculate the percentage of
bid shaving, but it depends on many assumptions about other
bidders and their bidding strategy. In practice you might want to
reduce your bid by a percentage to avoid overpaying. The
students who bid on the jar of coins might reduce their guesses
by 20 percent, for example, to reduce the chance the winner
overpays.
Note
[1] Treynor, Jack L. “Market Efficiency and the Bean Jar
Experiment.” Financial Analysts Journal 43.3 (1987): 50-53.
5. Consistency And Social
Norms: Question The Rules
Which way are the arrows pointing in the following figure?
[Figure: arrow illusion]
If you focus on the black space, the arrows are pointing to the
right. But if you focus on the white space, the arrows are
pointing to the left.
Many times in life we face situations where more than one
perspective is valid. In society there is often a benefit if everyone
agrees to a particular rule, and customs are born of the need
for efficiency. If the arrows were directing traffic, for
example, you would want everyone to agree on a convention to
avoid collisions.
There are other arbitrary rules that become adopted for
efficiency. Business calendars around the world follow the same
solar calendar system that dates back to the Roman Empire.
Would our lives really be much different if we followed a lunar
calendar system instead, or devised a new calendar system?
Clearly there are benefits when we can agree upon a convention,
and over time the agreement becomes a tradition. It would be
very costly and difficult to change the calendar system now.
But does that mean the current system is the best one? Not
necessarily. Some traditions happen by luck or for historical
reasons. People mistakenly think social norms must be great
because they became the custom. There is a tendency to conform
to group behavior.
This chapter is about how you should pay more attention to the
rules of the game and see if you can make a better decision.
Many times people make suboptimal decisions for lack of
thought or for lack of courage to do something different.
In the arrow illusion, we recognize that multiple interpretations
are valid, and we allow people to appreciate the picture in
whatever way they like, or in both ways. When you spend your
money, you should be able to spend it in legal ways that bring
you happiness rather than living by sometimes contrived social
customs.
The Wrong Average
The illusion: The word “average” is often interpreted as being a
representative sample.
The rational response: Many measures of central tendency are
called “average.” Learn which measure is being used in each
setting and the limitations of each measure.
Examples
Here are some of the common statistics that are used when
people say “average.”
1. Simple average / arithmetic mean
An example of the simple average is the statement: “the average
adult weighs 180 pounds.” The simple average is the most
common meaning for the word average. The simple average is a
great way to summarize a pattern when individual data points are
in the same ballpark as each other (not too many outliers).
The simple average is calculated by adding up all the data points
and then dividing by the number of data points.
2. Median: the center point
People often say the word median, like “the median reported
income was $76,000.”
The simple average is flawed because extreme points can swing
the average away from the center. If 9 people owned $76,000
homes and one person owned a $1 billion home, the simple
average would be about $100 million, wildly unrepresentative of
the $76,000 that most homes are worth. The median offers
the middle point of the data, so it will not be affected by a few
extreme values. It is most often used for distributions like
income and housing, where extreme observations would bias the
average.
The median is calculated as the middle point of the list ranked in
order (or the average of the two middle points if there is an even
number of data points).
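The home-price example makes the contrast easy to check in Python:

```python
import statistics

homes = [76_000] * 9 + [1_000_000_000]  # nine modest homes and one mansion

print(statistics.mean(homes))    # 100,068,400: dragged up by the outlier
print(statistics.median(homes))  # 76,000: the typical home
```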
3. Mode: the most prevalent
The mode is commonly used in advertising with claims like “our
brand was the most often preferred by dentists.” The mode
indicates the most popular choice. Which NBA jersey sold the
most? Which company did most new employees choose? These
are examples of the mode.
The mode is calculated as the data point with the highest
frequency.
4. Weighted average
Stock market indices are examples of a weighted average, like
“the S&P 500 closed at 2,013.” A simple average gives an equal
weight (1/n) to all data points (sample size n). The weighted
average allows you to give each data point a different weight.
Why do that? The idea is that “important” data points can have more
influence. For instance, if you invest 90 percent in stocks and 10
percent in bonds, then your overall return will depend more on
stocks than on bonds. You can find your overall return by
multiplying your stock return by a 90% weight and then adding
your bond return multiplied by a 10% weight.
The weighted average is calculated by multiplying each
observation by a “weight” (between 0 and 100 percent, and all
weights add up to 100 percent), and then adding up the terms.
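For instance, the stock-and-bond portfolio above can be computed directly. The 8 percent and 3 percent returns here are made-up figures for the sketch:

```python
weights = {"stocks": 0.90, "bonds": 0.10}  # weights add up to 100 percent
returns = {"stocks": 0.08, "bonds": 0.03}  # hypothetical annual returns

portfolio_return = sum(weights[k] * returns[k] for k in weights)
print(portfolio_return)  # 0.075 with these assumed numbers
```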
5. Geometric mean
If someone says their investments returned an average of 5%
annually over 10 years, the 5% annualized return is an example
of a geometric mean.
Here is an example to illustrate. Let’s say you invest $100. It
becomes $105 a year from now (5 percent return), and then
$115.50 in two years (10 percent return). What was your average
annualized return? You might be tempted to say 7.5 percent,
which is the simple average. This is very close to, but not exactly,
the correct answer. The reason is that the second year’s return
is multiplied on top of the first year’s—there is a cumulative
effect. We need to use the geometric mean to correct for the bias.
The correct answer is 7.47 percent.
The geometric mean is a bit more complicated to calculate. If
there are n years of returns denoted by r1, r2, …, rn, the
geometric mean is:
[(1 + r1)(1 + r2)…(1 + rn)]^(1/n) − 1.
The idea is we want to find a single growth rate, which if
compounded for n years, would give the same final answer as
what our investment actually did return.
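Using the $100 example above, a short sketch confirms the 7.47 percent figure:

```python
returns = [0.05, 0.10]  # year-one and year-two returns

growth = 1.0
for r in returns:
    growth *= 1 + r  # $100 grows to $115.50 over the two years

geometric_mean = growth ** (1 / len(returns)) - 1
print(geometric_mean)                   # ~0.0747, i.e. 7.47 percent
print(100 * (1 + geometric_mean) ** 2)  # ~115.50, matching the actual result
```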
6. Harmonic mean
The harmonic mean is useful to calculate the average of two
rates. For example, let’s say I drove to work at 60 miles an hour.
Due to traffic, my drive home averaged just 40 mph. What was
my average speed for the entire trip?
It’s tempting to say the arithmetic average of 50 mph. But this is
wrong because we need to account for the time and distance of
each trip: the trip home took more time, and hence it will have
greater impact on the overall average speed. The proper way to
calculate the average speed is to use the harmonic mean, which
is 48 mph.
The harmonic mean is n divided by the sum of the
reciprocals. For two numbers x and y the
harmonic mean is 2/(1/x + 1/y) = 2(xy)/(x + y). For the speed
example, the harmonic mean is 2(2400)/(60 + 40) = 48 mph.
For n observations, the harmonic mean can be calculated from
the formula n/(1/x1 + … + 1/xn).
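The commute example in code:

```python
speeds = [60, 40]  # mph out and back, covering the same distance each way

harmonic_mean = len(speeds) / sum(1 / s for s in speeds)
print(harmonic_mean)  # 48.0 mph, not the arithmetic average of 50
```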
Priming Effect
The illusion: Seemingly irrelevant stimuli can influence how we
behave for another activity. We can be primed with information
that leads us to strange actions.
The rational response: Be more aware of your surroundings
and factors that might change your behavior. Use priming to
your advantage by thinking about saving money before making a
purchase.
Examples
Here is a fun experiment to try. Have a friend spell the word
POTS out loud, saying each letter, P, O, T, S quickly. Then ask
the friend to spell POTS out loud a few more times. Then ask:
what do you do at a green light? Most people instantly blurt out
“STOP.”
This is an example of the priming effect, a tendency to act based
on information already in the mind. A person who is spelling the
word POTS is repeatedly saying the letters that make up the
word STOP. When asked a question about a traffic light, the
natural response is to say STOP, even when everyone knows you
would GO at a green light.
Here is another question to test your mental math skills. Add up
the following numbers by reading the words out loud step by
step.
1000
20
30
1000
1030
1000
20
What did you get?
Most people, myself included, would say the answer is 5,000.
But do the math. The actual answer is 4,100. What happens is
each time you add a number, you get a result that starts with the
word “one-thousand.” At the very last step when you need to
carry over the result, the brain is primed to increase the
thousands value for the result of 5,000.
The priming effect may play a role in our purchase decisions.
Think about how many times people spend because they get the
idea that spending is appropriate behavior. One experiment
found using the word “sale” for items in a mail-order catalog
drove demand up by more than 50 percent, even if the “sale”
price was the same as the regular price [1]. The word “sale”
might prime you to think an item is a better bargain than it
actually is.
It is important to be aware of the priming effect since seemingly
irrelevant details might lead to automatic spending.
Note
[1] Anderson, Eric, and Duncan Simester. “Mind Your Pricing
Cues.” Harvard Business Review 81.9 (2003): 96-103.
Expensive Expectation
The illusion: We are willing to pay more if an item comes from
a place we expect to pay more.
The rational response: Think about the item and where it is
being consumed. Imagine buying the item at a regular store and
set that price as your willingness to pay.
Examples
How much are you willing to pay for a fountain soda in a food
court?
Many years ago I was at the National Mall in Washington DC.
My friends and I split up at the food court to get our favorite
items. I was about to add on a soda for $2, but the price seemed a
bit high. There were many restaurants around; might I get a
better deal?
I walked around and was surprised to see a lot of variation, with
the more expensive places generally charging more. That might
make sense when dining in a restaurant, where you pay for
ambience and service. But in a food court, everyone eats at the
same tables.
I found the lowest price at an ice cream shop. Perhaps it had to
discount or else few people would buy soda with ice cream.
Why didn’t more people comparison shop? Why were people
willing to pay more at the more expensive restaurants for pretty
much the same soda?
Research finds that people are willing to pay more if an item
comes from an expensive place. Richard Thaler demonstrated
this in 1985 using the following scenario [1]. He asked people to
imagine they were on a sunny beach. A friend would pick up
beer from town for them. What is the most they would pay for
their favorite brand of beer? The friend would only get the beer
if the price was below that amount.
The answer should depend on how much a person valued
drinking a cold beer on a sunny beach. It should not depend, for
example, on where the beer was purchased. Surprisingly people
were willing to pay an average of $2.65 if told the beer was
coming from a fancy resort hotel but only $1.50 if told the beer
was coming from a grocery store.
In both cases the person was enjoying a cold beer on a beach. If
the beer was $2, the data suggest people would feel the price is a
“rip-off” from the grocery store but a “bargain” from the fancy
resort hotel.
It is understandable beer would cost more in a fancy hotel. Hotel
bars charge for service, ambience, and their convenience value to
guests. But in the experiment the beer would be enjoyed on the
beach, so it shouldn’t matter where it comes from—the amount
you are willing to pay should be the value of drinking on the
beach.
Most of us have a sense of how much an item should cost, and
we should stick to that when determining our willingness to pay.
Note
[1] Thaler, Richard. “Mental Accounting and Consumer
Choice.” Marketing Science 4.3 (1985): 199-214.
Sunk Cost Fallacy
The illusion: We continue to spend time or money on a bad
option because we previously expended time or effort into that
option.
The rational response: Make the best decision looking forward.
Accept a previous bad decision was made, and then move on to
make better decisions.
Examples
A sunk cost is an expense that you cannot recover. You might
have spent a lot of time and money doing something but now
that effort has no value.
A rational agent makes the best decision going forward. What
options are available now, and what is the cost/benefit of those
decisions? Sunk costs are unrecoverable so they do not impact
marginal decision-making. In other words, sunk costs should be
ignored.
But time and again, we are emotional with our purchases and we
factor old memories when we make new decisions, known as the
sunk cost fallacy. Here are a few common and real-life examples
of the sunk cost fallacy.
1. “I came all this way, so I might as well buy something.”
I overheard this at an orchid shop. Some people were
disappointed they did not find anything they liked, particularly
because they had driven a long way to visit the shop. They felt
compelled to buy something since they had made the trip.
However, the time of the trip was a sunk cost and should not
have mattered: it was only relevant whether they wanted to buy
something of value.
The same tactic is perhaps used by outlet malls which are located
far away. Customers who drive long distances are not in the
mood to go home empty-handed, so they end up buying
something they may not really need.
2. “I don’t need to finish this beer—it was only $2. Let’s go to
the next place.”
My friend said this in a New York City bar. We were grabbing a
beer before dinner and found an incredible special of $2 pints,
about a third of the normal price. Some people finished their
beers early and were ready to go. My friend left his beer a
quarter-full, joking the beer was so cheap he was fine wasting it.
Of course, the cost of the beer was a sunk cost. It only matters if
he wanted to drink it or not (which he clearly did since he was
ready to drink more!).
3. “I need to get my money’s worth at a buffet.”
The cost of the buffet is a sunk cost. You should not base how
much you eat on how much you paid. Instead, you should base it
on whether you will enjoy the next morsel of food.
4. “I bought a voucher for yoga classes. Then my friend took
me to a yoga class and I didn’t like it. But I might as well
take the class since I paid for it.”
People often buy things on daily deals sites and then find they
actually do not care for the activity. The voucher should be a
sunk cost, but many people complete the activity because it is
pre-purchased.
5. “I fixed up my 10 year old car for $500. A week later I
discovered another problem that will cost $1,000 to fix. Since
I decided last week that I wanted to keep my car, I’m going
to pay for the repair.”
What was spent on the car before should not matter. The
decision to repair is supposed to be a forward-looking decision
that compares things like the cost of repair, cost of a new car,
and salvage value. Nevertheless, people often do too many
repairs to be consistent with a past decision to repair.
6. “I hate the treadmill we have at home, but I’m not gonna
pay for a gym membership.”
It doesn’t matter what you paid for a home gym. You should be
deciding on whether a gym membership is worth the cost versus
the benefits of exercising there. Too many people don’t exercise
because they don’t like their home equipment. Sell or get rid of
the old junk so you can avoid the sunk cost trap.
7. “Are you trying to rob me? I paid good money for this
lawn furniture when I bought it!”
I heard this at a garage sale where the owner was moving. The
buyer saw the owner was desperate and asked to haggle on price.
The lawn furniture was out of style and had dropped in price
since he bought it. But he was not willing to part with it because
of the price he paid, which was a sunk cost.
Sadly the market value for the furniture was not very good.
Better to sell it—which he did angrily—than to hold on to it
because of a high price paid before.
Money Comparisons
The illusion: We are afraid to be labeled “cheap” if we spend
money differently than others.
The rational response: Spend your money according to your
preferences.
Examples
Pain management is personal. Doctors generally ask patients to
rate their perceived pain on a scale of 1 (no pain) to 10 (highest
pain). Doctors do not simply rely on formulas and their own vast
experience of treating patients. The doctor lets the patient
convey their experience.
Why can’t we use that logic for managing our money? Managing
money should be personal. No matter what other people do with
their money, it is better that your own circumstances and
preferences guide your spending habits. This sounds obvious but
we all experience times when we make decisions by comparing
with our friends.
Years ago my friend was considering buying an iPod. We were
discussing the pros and cons. He was on the fence, but he was
leaning toward buying it. Here are some of the reasons he
offered:
“My friends love it and tell me I will find uses for it.”
“My friend told me not to be cheap.”
“My friends bought one, so I can afford it.”
I wanted to tell him that he was thinking about it the wrong way—it
is better not to make comparisons. It is a trap to use other
people’s reasons for your money. And I am sure he would have
agreed with me. But I doubted the message would have stuck.
After all, it is natural for us to make comparisons because no one
is an island. We are bound to have friends and family that
encourage bad spending habits. We cannot always control our
environment. The tendency for one-upmanship might lead to
spending more to outdo others. In fact, a study found that men
actually do tend to spend more when they shop with other men
[1].
I try not to rely too much on other people’s habits. Before I make
a money comparison, I do one thing: I consider if the
comparison group is good with money.
We often use our coworkers and friends as models because they
are available to us. We fail to ask if they are actually responsible
financially. If you find yourself thinking an item is affordable
because others have it, consider a few questions about other
people.
Do they have credit card debt because they live beyond their
means?
Are they saving enough for retirement?
Do they save in other areas where you spend more? Say, do they
cook at home while you go out regularly as a foodie?
After this exercise, I am reminded how many people struggle to
pay for their lifestyles, so I tend to ignore pressure from
financially irresponsible peers.
Note
[1] Kurt, Didem, J. Jeffrey Inman, and Jennifer J. Argo. “The
Influence of Friends on Consumer Spending: The Role of
Agency-Communion Orientation and Self-Monitoring.” Journal
of Marketing Research 48.4 (2011): 741-754.
Untested Preferences
The illusion: We profess brand preferences when we have not
tested them.
The rational response: Do a proper blind taste test and see if
you really prefer one product over another.
Examples
Do you have strong preferences for food and drink? One of my
friends insists on drinking high end vodka. Another prefers the
taste of brand name cereal. And someone else drinks organic
milk primarily for its taste rather than for health reasons. These
specialty products can run up to double the cost of generic
counterparts. Are they worth it?
There is nothing wrong with spending money for things you
want. The question is, are you really getting value, or are you
paying for something that’s in your head?
I personally save money by buying generic foods and mid-tier
“value” brands. The point is not that I am saving money by
buying cheaper products. The point is I am not settling: I have
tested out the products, and I am buying the cheapest version
that satisfies my tastes.
Some of you will say that brand names and higher class versions
are better, and that you can tell the difference. Research suggests
the average consumer is overconfident in claims of brand
preference. Studies have found that few consumers can
accurately identify differences between brands like Coke/Pepsi
or Budweiser/Miller, and that many taste tests have a flawed
design [1].
My biggest pet peeve is someone who claims to have tested their
skills unscientifically. The story usually goes as follows. At
some point the person asked a friend to pour out two samples in
unmarked glasses. After tasting, the person identified his
preferred item which happened to be his favorite brand.
“Aren’t you impressed?” the person asks me.
Hardly.
The test was too easy: I point out there was a 50/50 chance of
guessing correctly by luck. I’m no more impressed than if
someone were to toss a coin and correctly predict it landing on
heads.
The point is most people have never honestly tested their ability.
Those who have tested with a simple paired blind taste test are
equally unconvincing: the test is not powerful enough.
The way I look at it, if you think a brand name merits its extra
cost, then you owe it to yourself to prove that claim
systematically. You should put your tastes to the test and see if
you are spending your money wisely. And that requires a test
commonly used in the food industry.
The triangle taste test
Most people associate a blind taste test with a paired tasting.
This is the classic test: you have two unmarked samples of A and
B, and you have to identify your preferred brand.
This is an easy test to conduct, but it is also easy to guess
correctly.
The food industry uses another blind test known as a triangle
taste test. In this you have three samples: two samples are the
same and one is different. The task is to identify which sample is
different.
For example, here is how a triangle test would work with Coke
and Pepsi. You would be faced with three unmarked glasses. It
might be two Pepsi samples and one Coke, or it could be two
Coke and one Pepsi. Either way, your job is to identify which
sample is different.
This is a much harder test than the simple paired tasting. For
one, it is harder to guess correctly. As there are three glasses,
there is a 1/3 chance of guessing by luck. The test is also harder
because there are three samples. It takes more skill to
discriminate between products that have similar tastes.
The triangle test is harder than it sounds, and I suggest you give
it a try. So far I have yet to find anyone that passes this test for
colas or liquors.
Colas vs colors
Some people tell me the test is stupid. They say I have tricked
them by putting things in a particular order, and that is why they
cannot figure it out. Or they say they had sensory overload so it
was hard to distinguish the samples.
While some people do have highly refined palates to
differentiate similar chemical compositions, as sommeliers can
do with wine, most people do not.
Consider an analogous triangle test for identifying colors. If I
handed you three pieces of paper, two colored blue and one
colored orange, you would never make a mistake. You could
always tell me which one was different. Even if I made the
colors closer like navy blue and regular blue, you would
probably have no problem.
Trying to identify Coke versus Pepsi is considerably harder.
Discerning these by taste is like trying to identify the difference
between the colors sapphire and navy blue at a distance of 100
meters. It is possible, but it is not easy.
A blind taste test procedure
Consider this procedure I have adapted from a paper on blind
taste tests [1]. You will need a friend to pour the samples. You
could also get another friend, ignorant of the pouring, to present
the samples and record the data (so the friend who pours does
not inadvertently tip you off as to which sample is what).
1. Do a triangle taste test (from a set of 3, where 2 are the
same): see if you can identify which one is different. All
samples should be in unmarked glasses of the same size.
2. Do a paired blind taste test (comparing brand A and B in
2 glasses) and identify which one you prefer.
3. Repeat the paired blind taste test to see if you like
the same brand two times in a row.
4. If you identified correctly in all three tests, and you still
preferred one brand, then you pass.
In other words, you combine a triangle test with two different
paired blind taste tests. The triangle test is about seeing if you
can distinguish small sensory differences, and the paired tests are
about verifying how consistent you are in your tastes.
To pass all three tests is not easy by chance. You have just a 1/12
chance of guessing your way through (the triangle test is a 1/3
chance, and each paired test is 1/2 chance). The results can be
humbling.
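If you want to see how rarely luck alone gets someone through the full protocol, a small simulation of pure guessing makes the 1/12 figure concrete:

```python
import random

TRIALS = 100_000
passes = 0
for _ in range(TRIALS):
    triangle = random.random() < 1 / 3  # guess the odd sample out of three
    paired_1 = random.random() < 1 / 2  # name the same brand in paired test 1
    paired_2 = random.random() < 1 / 2  # and again in paired test 2
    if triangle and paired_1 and paired_2:
        passes += 1

print(passes / TRIALS)  # ~0.083, about 1 in 12
```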
Quantify your preference
You should consider a blind taste test when you are comparing
products purely on taste with big price differences.
Which taste did you like?
If you liked the more expensive brand, was it really worth it?
I hope you will be able to find ways to save money on brands by
tasting them out. In the few years I have tried this, I have found
time and again that generics taste almost the same for much
less money. I have saved money on food across the board by
experimenting. But in some cases I will buy the brand name, and
I will be content to pay extra for it.
A caveat about blind taste tests
Now I’ll admit there is a flaw with this test. Some foods may
taste better or the same but not have the same food quality. For
example, it might be hard to detect the partially hydrogenated
oils in regular peanut butter versus “natural” peanut butter. This
is partly because the non-natural food version is created to
mimic the taste of the natural version.
When the primary concern is health, I go with health research as
a guide. But most of the time I am focused on taste, and that is
when the triangle blind taste test is a useful guide.
Notes
[1] Priya Raghubir; Tyzoon T. Tyebjee; Ying-Ching Lin. “The
Sense and Nonsense of Consumer Product Testing: How to
Identify Whether Consumers Are Blindly Loyal?” Draft (2005)
can be accessed online at
http://goldberg.berkeley.edu/courses/S06/IEOR-170-
S06/docs/CMR_submission1.doc
I read about the triangle taste test and sensory overload in
Malcolm Gladwell’s book Blink.
Speeding
The illusion: People think driving fast saves a lot of time.
The rational response: Do the math: speeding does not save
much time, and it increases the risk of a collision.
Examples
Let’s begin with a mathematical example. Bob has a simple
commute: he drives 60 miles on a highway. Normally Bob drives
with the flow of traffic at 70 miles per hour.
One day he is anxious and instead speeds at 80 mph. How much
time does he save? How much time would he save at 90 mph?
We can solve for the time savings using the relation
Distance = Speed × Time. Rearranged for time in minutes, this
gives Time (minutes) = 60 × Distance / Speed.
At 70 mph, Bob takes 51.4 minutes; at 80 mph, Bob takes 45
minutes; and at 90 mph, Bob takes 40 minutes.
As you can see, Bob saves about 10 minutes by driving at an
excessively high speed of 90 mph. Speeding is not a big time
saver.
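Bob's commute in code, using the formula above:

```python
distance = 60  # miles

for speed in (70, 80, 90):  # mph
    minutes = 60 * distance / speed
    print(f"{speed} mph: {minutes:.1f} minutes")
# 70 mph: 51.4 minutes; 80 mph: 45.0 minutes; 90 mph: 40.0 minutes
```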
Even though speeders save little to no time for themselves, their
decision does increase the risk for themselves and all drivers.
According to a Department of Transportation publication in
2014, speeding-related accidents cost over $40 billion per year,
and over 10,000 lives were lost in speeding-related crashes [1].
But there is one caveat to the discussion. If enough people ignore
the advice and speed, then the norm of speeding actually
becomes the safe thing to do.
Coordination games
Consider a few common decisions. You’re in a car: do you drive
on the left or right side of the road? You have lost your friend at
the bar and you have no cell service: where do you meet? You
and a partner are creating a presentation and will need to merge
slides later: which software do you use?
In each case, there is no “right” answer. The correct decision is
that you want to match the choice of the other parties and vice
versa. A game of coordination is one where the players want to
match the same strategy.
You drive on the right side in America because that’s what
everyone else does, and it would be extremely dangerous for you
to do anything else. But in England you would naturally drive on
the left side. Similarly, there is no correct place to meet your
friend. All that matters is you both think of the same spot. And
you can merge slides if you both use PowerPoint, Keynote or
LibreOffice.
The nice part about games of coordination is they are easy to
analyze: any strategy profile with matching strategies is a
solution (a Nash equilibrium). Occasionally some choices are
“natural” decisions and make coordination easier; these choices
are known as focal points or Schelling points.
But often there is not a natural choice, and the existence of
multiple equilibria makes offering a prediction very difficult.
Multiple equilibria in “safe” driving
To illustrate an extreme example of a coordination game, think
about the following situation. You’re in India and driving the
streets of Mumbai after midnight. You approach a stop light that
is red. Do you slow down and stop?
The answer might depend on the current custom. When I visited
in 2012, I saw many drivers in Mumbai treat the red light just as
many cyclists in America treat stop signs. If the streets are
empty, they might slow down to check if anyone is around, and
then they will zoom by. You might think this is wrong to break
the law. However, the decision is motivated by the coordination
game. In Mumbai some drivers routinely run red lights late at
night. If, opposite to the custom, you slow down and stop, a
driver behind you might not expect it and run into you. At a
minimum the driver might honk at you for being “stupid” for
stopping when the cross-traffic is empty. In other words, when
running red lights at night is the custom, stopping at a red light
could be considered the “wrong” choice.
Even worse, such behavior can be self-enforcing. If other drivers
are reckless, then you have to be reckless too, as it would be
even more dangerous to try and drive cautiously!
Multiple equilibria in the speed limit game
In driver’s education class, we learned about the physics of
speeding. We need more time to slow down in a faster car; we
move a longer distance before we can even hit the brake (effect
of human reaction time); and we increase the risk of a fatal
collision with others—especially pedestrians.
But we could have equivalently learned about the economics of
speeding. We should be able to drive faster on roads designed for
higher speeds, and the time saved would probably compensate
for the ticket fines. Cops can only write up a fraction of traffic
violations, and the net effect would be to increase the flow of
traffic. If enough people speed, maybe the law would change for
higher speed limits.
The existence of multiple equilibria means it is safer when
everyone follows the same rule, at a higher speed or not.
In America it is natural that many people speed on the highways,
and they find lots of support for the behavior. I think if people
did the math on how much time they saved, and the extra risk,
they would take a moment to slow down. And the slower speed
would make safe driving even safer if everyone agreed to slow
down.
Note
[1] U.S. Department of Transportation, National Highway
Traffic Safety Administration, National Center for Statistics and
Analysis, (May 2014). “Traffic Safety Facts, 2012 Data.”
http://www-nrd.nhtsa.dot.gov/Pubs/812021.pdf.
Volunteer’s Dilemma
The illusion: We ask others to sacrifice for a common goal even
when it is against their own interest.
The rational response: Consider the best choice for your
interest, and do not be persuaded to let others enjoy a benefit
while you have to sacrifice.
Examples
We start with a thought experiment. The following choice is
based on an actual exam question from the University of
Maryland by Professor Dylan Selterman [1].
Here you have the opportunity to earn some extra credit on
your final paper grade. Select whether you want 2 points or
6 points added onto your final paper grade. But there’s a
small catch: if more than 10% of the class selects 6 points,
then no one gets any points. Your responses will be
anonymous to the rest of the class, only I will see the
responses.
—2 points
—6 points
How would you answer?
Game theory
In decision theory, your decision to buy an extended warranty or
lease a car is an individual choice. You weigh the costs and
benefits of the decision and choose whether to do it.
Most decisions in life are not so simple. They also depend on
what others will do. Your choice to buy Apple products depends
on whether others will buy them too, which will lead to a larger
market for accessories, more demand for apps, and more hotels
and cars having iPhone compatible chargers. You might actually
prefer Android devices but you find Apple products are a better
decision overall for this reason. Your choice depends on what
you do as well as what others do.
Game theory studies such situations of interdependent decision-
making. Game theory helps you make the right decisions
because you can identify the strategic incentives. And ultimately
the knowledge can help you design better mechanisms so
everyone can benefit.
The Nash equilibrium is a common way to evaluate the solution
of a game. It happens when each person is making the best
choice relative to what everyone else is doing. No one can
individually change and profit. So let’s return to the exam
question.
Your logical thought process on the test
Many people think, “I’ll just take 2 because if everyone does this
we all get 2.” But that’s not a sufficient analysis of the game.
You have to think about your best choice relative to what others
can do. If everyone else is picking 2, why wouldn’t you pick 6
and get extra points?
So let’s analyze the problem carefully. We’ll consider your
choices and how they relate to what others might do.
You have two choices: you can pick 2 or you can pick 6. The
result depends on what other people do. The crucial detail is how
many people are picking 6. So let’s categorize the actions of
everyone else in that regard. There are essentially two main ways
that everyone else can pick.
A) More than 10% pick 6.
B) Less than 10% pick 6 (even after your pick).
In option A), too many people have picked 6, so everyone is
getting 0. You will get 0 regardless of whether you pick 2 or 6.
In option B), if you pick 6 then you will get 6 points. This is
clearly the right choice as you will get more points.
In other words, in options A) and B) it is clearly smart to pick 6.
If everyone thinks this way, however, then everyone will pick 6,
and everyone will end up with 0 points. It feels like individual
greed leads the group to a bad outcome, which is why people
feel the question is evil. I will get to the moral part in a bit. But
for now, let’s summarize that this is one Nash equilibrium: each
person picks 6 and everyone gets 0.
Another possible outcome
There is one more way this game can play out. Suppose that just
the right number of people pick 6:
C) Exactly 10% pick 6 (or as many as possible without
exceeding the threshold).
For example, in a class of 10, suppose one person picks 6 and
everyone else picks 2. The person who gets 6 is happy to get 6.
And each person who picks 2 is content with this outcome: if
any person changes to 6, then that person (and everyone) ends up
with 0 instead.
In other words, if the maximum number of people pick 6 and
everyone else picks 2, then no one can individually do any better
given what others are doing. This is also a Nash equilibrium.
In summary, there are two types of pure strategy Nash
equilibrium: one where everyone picks 6, and another
“Goldilocks” solution where exactly the maximum number pick
6 and everyone else picks 2.
The second category is a kind of magical solution because it is
hard to imagine people exactly coordinating with no
communication.
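A tiny payoff function shows both equilibria. This is a sketch of the rules as stated in the question, not the professor's actual grading code:

```python
def extra_credit(picks, threshold=0.10):
    """Extra credit for each student, given everyone's picks of 2 or 6."""
    if picks.count(6) / len(picks) > threshold:
        return [0] * len(picks)  # too many 6s: no one gets points
    return list(picks)           # otherwise everyone gets what they picked

print(extra_credit([2] * 9 + [6]))  # Goldilocks case: exactly 10% pick 6
print(extra_credit([6] * 10))       # everyone picks 6: all zeros
```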
Relative payoffs
The above analysis assumes that each person wants to maximize
points. This would be a correct assumption if exams were graded
against an absolute point threshold: say 90 points for an A grade,
80 points for a B grade, etc.
College exams are typically graded on a curve. Your grade is
dependent on how well you do relative to others. So it might be
that a 70 percent gets you an A grade because everyone did
worse.
A strategic student should then maximize relative points—the
number of points you would earn more than others. If everyone
gets 2 more points, then no one is relatively better. So let’s solve
the game in terms of relative payoffs.
You still have two choices to consider: you can pick 2 or you can
pick 6. But now the game has the following payout structure.
You can pick 2 or 6.
If everyone picks 2, then everyone gets 0 extra points.
If less than 10% pick 6, then the people who picked 2 get 0
extra points, and the people who picked 6 get 4 extra points.
If more than 10% pick 6, then everyone gets 0 extra points.
If you pick 2, you are guaranteed to get 0 extra points over
everyone else. If you pick 6, there is a small chance you might
get 4 extra points. In game theory lingo, 6 is a weakly dominant
strategy.
When you have a weakly dominant strategy, you should play it.
So everyone picks 6, and the Nash equilibrium is everyone gets
0 extra points.
What actually happened
The professor explained to the Baltimore Sun that more than
10% of students picked 6, so no one got extra points [1]. The
situation is an example of a social dilemma: while there is a
chance for everyone to possibly benefit with more points, the
exact nature of the game gives each person an incentive to pick 6
points, which is ruinous for everyone in aggregate.
The question is an example of a volunteer’s dilemma: some
people have to volunteer to sacrifice so that others and the group
can benefit.
I used to wonder why more people don’t pick 2 so that everyone
can benefit. But I do not feel that way anymore. I think you
should not feel bad about picking 6, and the problem is the game
is rigged against what is individually sensible.
In real life, the powers that be will tell you to pick 2 while they
pick 6. And if you try and change things, you will face resistance
from other 2’s that think you are spoiling things.
Rather than telling people to pick 2, the group could work
towards a fairer system. Perhaps the group could agree that
exactly 10% choose 6, and then everyone would split the total
surplus of points evenly. Or if the group took many exams with
this option, they could rotate which students get the 6 points.
In a single shot version, the game theory analysis shows why 6 is
the logical choice, and no one should be blamed for picking that.
I’d say the real tragedy, if there is any, is that it’s not easy to
convince people to do what’s logical.
Make the rational choice, and help others understand the rational
choice too.
Note
[1] Quinn Kelley. “UMD ‘Tragedy of the Commons’ Tweet
Goes Viral.” Baltimore Sun. 9 July 2015.
http://www.baltimoresun.com/features/baltimore-insider-
blog/bal-umd-tragedy-of-the-commons-tweet-goes-viral-
20150709-htmlstory.html
Conclusion
The rational agent model can be comforting. The crisp
assumptions and mathematical modeling serve as an inspiration
that we can make complex decisions correctly.
It can be startling to study how we actually make decisions and
then realize that we can be tricked. In this book I explained how
we can make bad decisions, like picking a single marshmallow
instead waiting a few minutes for two marshmallows (chapter 1).
We are biased by our relative perception of choices (chapter 2),
our inability to account accurately (chapter 3), our misjudgment
of probability (chapter 4), and our desire to be consistent to past
behavior and social norms (chapter 5).
Behavioral economics points out we deviate from the rational
agent model in predictable ways. These cognitive biases exist,
and we are psychologically susceptible to them.
The optimistic view is we can learn the patterns and then avoid
getting fooled. Behavioral biases are like optical illusions, and
just as we can recognize optical illusion patterns and identify the
real image, we can recognize behavioral biases and make
smarter decisions.
Books By Presh Talwalkar
I hope you enjoyed this book. If you have a comment or
suggestion, please email me at presh@mindyourdecisions.com.
The Joy of Game Theory: An Introduction to Strategic
Thinking. Game Theory is the study of interactive decision-
making, situations where the choice of each person influences
the outcome for the group. This book is an innovative approach
to game theory that explains strategic games and shows how you
can make better decisions by changing the game.
Math Puzzles Volume 1: Classic Riddles And Brain Teasers
In Counting, Geometry, Probability, And Game Theory. This
book contains 70 interesting brain-teasers.
Math Puzzles Volume 2: More Riddles And Brain Teasers In
Counting, Geometry, Probability, And Game Theory. This is
a follow-up puzzle book with more delightful problems.
Math Puzzles Volume 3: Even More Riddles And Brain
Teasers In Geometry, Logic, Number Theory, And
Probability. This is the third in the series with 70 more
problems.
But I only got the soup! This fun book discusses the
mathematics of splitting the bill fairly.
40 Paradoxes in Logic, Probability, and Game Theory. Is it
ever logically correct to ask “May I disturb you?” How can a
football team be ranked 6th or worse in several polls, but end up
as 5th overall when the polls are averaged? These are a few of
the thought-provoking paradoxes covered in the book.
Multiply By Lines. It is possible to multiply large numbers
simply by drawing lines and counting intersections. Some people
call it “how the Japanese multiply” or “Chinese stick
multiplication.” This book is a reference guide for how to do the
method and why it works.
The Best Mental Math Tricks. Can you multiply 97 by 96 in
your head? Or can you figure out the day of the week when you
are given a date? This book is a collection of methods that will
help you solve math problems in your head and make you look
like a genius.
The Irrationality Illusion: How To Make Smart Decisions
And Overcome Bias. This handbook explains the many ways
we are biased about decision-making and offers techniques to
make smart decisions. The biases of behavioral economics are
like optical illusions: while we fall for them every time, we can
also learn to recognize the patterns and see through the tricks.
Fool me once, shame on you. Fool me twice…you won’t get
fooled again after reading this book.
About The Author
Presh Talwalkar studied Economics and Mathematics at Stanford
University. His site Mind Your Decisions has blog posts and
original videos about math that have been viewed millions of
times.
