
Sociology Compass 5/10 (2011): 923–935, 10.1111/j.1751-9020.2011.00413.x

Is There a Right Way to Nudge? The Practice and Ethics of Choice Architecture
Evan Selinger1* and Kyle Whyte2
1Department of Philosophy, Rochester Institute of Technology
2Department of Philosophy, Michigan State University

Abstract
Cass Sunstein and Richard Thaler's Nudge: Improving Decisions about Health, Wealth, and Happiness presents an influential account of why 'choice architecture' should be used to 'nudge' people into making better decisions than they would otherwise make. In this essay we: (1) explain the main concepts that Thaler and Sunstein rely upon to defend their project; (2) clarify the main conceptual problems that have arisen in discussions about nudges; (3) identify practical difficulties that can arise during nudge practice; (4) review the main ethical and political objections that have been raised against nudging; and (5) explain why issues related to meaning can pose methodological problems for creating effective choice architecture.

Introduction
The social psychology literature shows that people's choices are influenced by their perceptions of what others around them are doing. Social influence can even lead people to do things that are against their best interests, such as taking drugs. More recently, it has been shown that one does not even have to directly observe what others are doing to be persuaded to do the same thing; written communication about others' behavior can be a sufficient prompt.
A 2008 study in California suggests that this insight could be used in efforts to get members of households to decrease their energy consumption by using fans (instead of air conditioning), taking shorter showers, and turning off unneeded lights (Nolan et al. 2008). The study distributed door hangers that encouraged energy conservation. To allow comparison, some door hangers appealed to social influence by claiming (truthfully) that high percentages of residents were conserving energy through one of the aforementioned behaviors; other door hangers appealed to self-interest (you can save money), environmental protection (you can help reduce greenhouse gases), and social responsibility (you can help future generations). After a period of time, the investigators checked the energy meters of the households to see if any changes in behavior had occurred.
Which door hanger was most influential in achieving energy savings? Although the same research subjects indicated in another part of the study that they were less likely to be persuaded by social influence than by ideals like social responsibility, it turned out that more energy was conserved by those who received the social influence door hanger than by those who received any of the others (Nolan et al. 2008). Social influence achieved better results, though it appears that few were aware of its power. Based on this study, a good way to decrease energy use may be to expose people – through door hangers, energy use statements, and other means – to statistics describing their neighbors' energy conservation behaviors. The statistics should be truthful, and they should not impose any new costs or penalties on individuals who fail to decrease their consumption of energy.


In Nudge: Improving Decisions about Health, Wealth, and Happiness (2008), Cass Sunstein and Richard Thaler offer solutions to personal and societal problems like encouraging energy conservation. They suggest that instead of challenging the emotions, desires, and other inclinations that can sabotage good decisions, progress can be made by finding innovative ways to channel them and put their influence to good use. Specifically, they claim that scientific knowledge of the human mind's tendencies to make irrational decisions and its failure to anticipate changes in mood can be applied to design projects that 'nudge' people to make better choices, whether or not recipients are aware of it. Telling people what others are doing in order to get them to conserve more energy is thus a nudge. Nudges also range from shrinking plate sizes in cafeterias so that people implicitly pile on less food, to repainting roadways so as to create the illusion that drivers are going too fast (making them slow down without thinking about it).
In a short time, Nudge became quite influential. United Kingdom Prime Minister David Cameron recently spent £520,000 to fund the Behavioural Insights Team (informally referred to as the Nudge Unit). The team seeks to develop nudges that will improve health, the environment, charitable giving, social networks, and the general sense of well-being in the UK. Cameron views nudging as an excellent mechanism to ''persuade citizens to choose what is best for themselves and society'' (Basham 2010). Cameron's predecessor, Gordon Brown, introduced ''policy on Nudge-style pension schemes'' and ''nudge-like public health policies'' (Basham 2010). In the United States, First Lady Michelle Obama is currently running a ''nudge-themed campaign against childhood obesity'' (Basham 2010). Paralleling these initiatives is an emerging academic literature that explores the promises and pitfalls of nudging (Bovens 2009, 2010; Burgess 2010; Hausman and Welch 2010; John et al. 2009; Lobel and Amir 2009; Ménard 2010; Pinch 2010; Selinger and Whyte 2010; Stoker and Moseley 2010; Thaler and Sunstein 2008).
But what exactly is a nudge, and how do nudges differ from alternative ways of modifying people's behavior, such as fines or penalties (e.g. taxing smokers) and increasing access to information (e.g. calorie counts on restaurant menus)? We open the Nudges Defined section by defining the concept of a nudge and move on to present some examples of nudges. Though there is certainly a clear concept of what a nudge is, there is some confusion when people design and talk about nudges in practice. In the sections Mistaken Nudges and Fuzzy Nudges, we discuss policies and technologies that are mistakenly called nudges, as well as borderline cases where it is unclear whether people are being nudged. Understanding mistaken nudges and borderline cases allows citizens to consider critically whether they should support 'alleged' nudge policies proposed by governments, corporations, and non-profit organizations.
There are also important concerns about the ethics of nudging people’s behavior. In
the section Ethical Anxieties and Criticisms of Nudges, we review some major ethical
and political issues surrounding nudges, covering both public anxieties and more formal
scholarly criticisms. If nudges are to be justified as an acceptable form of behavior modifi-
cation in democratic societies, nudge advocates must have reasons that allay anxieties and
ethical concerns. However, in the section Nudges and Semantic Variance, we argue that
nudge advocates must confront a particularly challenging problem. A strong justification
of nudging, especially for pluralistic democracies, must show that nudge designers really
understand how different people re-interpret the meaning of situations after a nudge has
been introduced into those situations. We call this the problem of 'semantic variance'. This problem, along with the ethical issues we discuss, makes us question whether nudges are truly viable mechanisms for improving people's lives and societies. Perhaps excitement over the potential of nudges is exaggerated.


Nudges defined
According to dual-process theory in psychology, thinking has two basic modes: 'reflective' and 'automatic'. People often rely on the automatic mode when they have to act quickly, lack sufficient information, are missing adequate feedback, do not have the needed experience, or find themselves caught up in unanticipated feelings or moods.
For example, in the evening we may commit to rising early the next day in order to
complete some pressing work, setting our alarm clocks accordingly. Yet, when awoken
abruptly by the alarm, sleepiness serves as a bias (a form of automatic thinking) that sends
us back to bed, undermining our previous night's resolve. Similar biases influence the control we give up when under the sway of a variety of agitated mental states, such as when anger and frustration incline us to immediately send off a condescending e-mail to our colleagues that makes them feel as if we do not respect their work or ideas. Biases also undermine the decisions we make when deliberation guides action, negatively influence the foods we choose when we are in a hurry, unduly minimize how much of our salary we end up saving when not prompted to save more, incline us to mindlessly surf the internet when we should be working on a demanding project, and so on.
Finding a way to lessen the negative impacts of automatic thinking appears to be a
worthy goal. Thaler and Sunstein favor an approach where people who are responsible for
designing ‘choice architecture’, or the conditions under which people have to make
choices, would consider what biases are likely to hold sway. The designers of choice archi-
tecture, or ‘choice architects’, are found everywhere, from alarm clock makers to exercise
and health professionals to retirement planners to bureaucrats in government agencies.
Choice architects should identify the biases that can be detrimental in select circumstances
and then create devices (e.g. alarm clocks), plans (e.g. dietary regimens and retirement
schemes), and government policies that work with the biases to advantage those who have
choices to make. Thaler and Sunstein call this activity ‘nudging’ and define a nudge as any
aspect of design ‘‘that alters people’s behavior in a predictable way without forbidding any
options or significantly changing their economic incentives’’ (Thaler and Sunstein 2008,
6). Nudges are changes in the decision-making context that work with cognitive biases and prompt us, in subtle ways that often operate below the level of our awareness, to make decisions that leave us, and usually our society, better off.
Thaler, Sunstein, and choice architects around the world offer numerous examples of
nudges, two of which we will describe here. The first, which we referred to briefly in
the Introduction, involves Lake Shore Drive, a roadway that has stunning views of Chi-
cago’s skyline. One particular segment includes a series of S curves that require drivers to
slow down to 25 mph. Many drivers ignore the posted sign that states the reduced speed limit. They are either easily distracted by the scenery or unable to assess how tight the curve is, either of which may result in accidents. A new way of nudging drivers to slow down has reduced the individual and societal costs of these accidents. Immediately after passing a warning sign, drivers encounter a series of white
stripes that are painted onto the road. Thaler and Sunstein describe this interface as a
prompt that inclines drivers to slow down:
When the stripes first appear, they are evenly spaced, but as drivers reach the most dangerous
portion of the curve, the stripes get closer together, giving the sensation that driving speed is
increasing. One’s natural instinct is to slow down (Thaler and Sunstein 2008, 38–9).
In short, the stripes work with drivers’ tendencies better than conventional signs because
they convey the point about slowing down intuitively and subtly. That is, the stripes do
not require drivers to interpret data connecting speed and the risk of an accident, or to
figure out whether a warning sign really applies to them and their cars. Rather, at a gut
level, the stripes influence how drivers experience the turn. This way of influencing what
people experience decreases the incidence of bad decisions, thereby minimizing the costs
of accidents to both individuals and other members of society.
The New York Times selected our second example as one of the best ideas of 2010. ToneCheck, an ''emotional spell checking'' tool, scans the content of e-mails for signs of emotionally charged sentences and phrases. It flags instances where our messages may come across as angry, mean, arrogant, or negative, among other emotions it tracks. Authors are nudged away from sending emotionally charged e-mails on impulse. ToneCheck nudges because it preserves our ability to choose to send a particular e-mail in a timely manner, yet stops us from sending e-mails that we do not want to send when our emotions have gotten the better of us. ToneCheck does not fine or penalize us for having sent an e-mail that offends others or is unprofessional. It works with the fact that we will inevitably find ourselves in emotionally charged states that urge us to make bad decisions. The change in choice architecture after ToneCheck has been introduced serves to reduce the number of e-mails that we may regret sending. This can enhance our reputation, improve our relationships, and foster a good workplace atmosphere.1
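To make the mechanism concrete, the following is a minimal sketch, in Python, of the kind of ''emotional spell check'' a tool like ToneCheck performs. It is purely illustrative: the word list, function name, and flagging logic are our own assumptions for the sake of exposition, not ToneCheck's actual lexicon, model, or interface.

import re

# Toy lexicon of charged terms; a real tool would rely on a trained model
# rather than this hand-written, hypothetical list.
CHARGED_WORDS = {"idiot", "ridiculous", "useless", "incompetent", "lazy"}

def flag_charged_sentences(message):
    """Return the sentences in a draft e-mail that contain charged language."""
    sentences = re.split(r"(?<=[.!?])\s+", message.strip())
    flagged = []
    for sentence in sentences:
        # Strip surrounding punctuation and lowercase before matching.
        words = {w.strip(".,!?;:").lower() for w in sentence.split()}
        if words & CHARGED_WORDS:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    draft = "Thanks for the draft. Frankly, this section is ridiculous. Let's talk tomorrow."
    for sentence in flag_charged_sentences(draft):
        print("Consider rephrasing before sending:", sentence)

The nudge lies in the prompt to pause, not in any penalty: the author remains free to send the flagged message unchanged.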
These examples show that nudges can be quite influential in modifying people's behavior in ways they are not aware of. To ensure that such influence is not exploitative, Thaler and Sunstein insist that choice architects design nudges that are inexpensive to use, easy to opt out of, transparent (i.e. in principle publicly defensible), that function without changing financial incentives, and that are designed only to help people live according to their best interests. Nudges are supposed to be effective means for choice architects to help us satisfy ends we select for ourselves but routinely fail to meet due to the inevitable biases described earlier. Nudges are always choice preserving. Fines and penalties are not nudges because they change people's incentives and add costs that were not present before.2

Mistaken nudges
Although the concept of a nudge appears straightforward enough, designing and applying
nudges in practice can be confusing. The Behavioural Insights Team believes that one British program uses nudges by giving citizens free anti-smoking kits containing nicotine replacement patches and voucher checkbooks with discounts on gym memberships, swimming sessions, and other healthy activities. Similarly, a US News article, 'How to Make 2011 Resolutions Stick', offers the following Nudge-inspired recommendation:
One solution Thaler and Sunstein identify is the concept behind the website Stickk.com, which
was created by fellow economists. It lets people back up their goals with financial or nonfinan-
cial incentives. For example, if you want to stop wasting money on happy-hour martinis, you
can put up $100. If you achieve the goal, you keep the money. If you don’t, then you pay it,
either to a charity or to another recipient of your choice.3
While each of these examples may be a smart way to change people’s behavior, neither of
them does any nudging. The British program changes some of the costs associated with
quitting smoking and healthy activities; Stickk.com provides a way for people to attach
financial incentives to desired behaviors and creates a mechanism that holds people to the
arrangements they agreed to. Both programs bank on people’s responding rationally to
changes in financial incentives. Nudges, however, alter the choice architecture in ways that expect people to continue to respond according to their biases – though the desired result is an outcome the agent would have chosen under circumstances where more reflection was possible. Thaler and Sunstein themselves miss this point when they represent Stickk.com as providing customers with nudges. Changes in incentives are not nudges; they do not work with biases but instead expect people to respond rationally.
Another example of a mistaken nudge is the Toxic Release Inventory, which discloses
information to the public on how much pollution companies release into the environ-
ment. The availability of this information motivates companies, through the threat of bad publicity, to decrease pollution, improve environmentally responsible behavior, and
remain in conformity with safety, health and clean-up standards (Thaler and Sunstein
2008, 190). Thaler and Sunstein claim that such a nudge could be expanded to include a
Greenhouse Inventory that would track changes in greenhouse gas emissions over time
(Thaler and Sunstein 2008, 191). Unfortunately, ‘‘environmental blacklists’’ like the
Toxic Release Inventory do not nudge because they really serve to increase the costs of
polluting (Hausman and Welch 2010, 124). Such a Greenhouse Inventory would change
financial incentives as opposed to working with the unconscious biases that corporate
executives or other key players are subject to.
A final example of a mistaken nudge is a suggestion made by Jean King, Cancer Research UK's Director of Tobacco Control. She claims that cigarettes should be made ''less alluring'' for children to smoke, and suggests that this can be done by prohibiting ''brightly lit'' displays from appearing next to ''sweets and crisps.''4 Though King's suggestion draws on psychological insights, it is still a prohibition. It does not nudge tobacco companies to stop targeting children, nor does it nudge children to feel less drawn to smoking. Prohibitions, policies, and programs that draw on psychology and behavioral economics are not nudges if they end up limiting people's choices. Strategies to modify people's behavior that use psychology and behavioral economics are not necessarily nudges. Nudges use the results of these social sciences, but they do so in ways that work with biases and preserve choice and incentives.
Citizens thus should be aware that an initiative is not guaranteed to be a genuine nudge simply because someone refers to it that way. This gap between designation and reality matters insofar as some citizens may want to support policies only if they are genuine nudges and not prohibitions or changes in financial incentives.

Fuzzy nudges
Sometimes it is not clear whether an intervention qualifies as a nudge. For example, Daniel Hausman and Brynn Welch argue that Thaler and Sunstein mistakenly count cases of ''giving advice'', ''rational persuasion'', and rendering ''information salient'' as nudges. Hausman and Welch are struck by the example of the Ambient Orb, a technology that provides feedback on energy consumption (Thaler and Sunstein 2008, 196). The orb glows red when homeowners use lots of energy and green when they are using much less. Thaler and Sunstein claim that the orb is so persuasive that ''in a period of weeks, users of the Orb reduced their use of energy, in peak periods, by 40 percent'' (2008, 196).
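The feedback mechanism itself is simple enough to sketch. The snippet below is a minimal illustration of threshold-based color feedback in the spirit of the Ambient Orb; the thresholds, function name, and intermediate ''amber'' state are assumptions of ours, not the device's actual design.

def orb_color(current_kw, low_kw=1.0, high_kw=3.0):
    """Map a household's current power draw (kW) to a feedback color."""
    if current_kw >= high_kw:
        return "red"    # heavy use: glow red to prompt cutting back
    if current_kw <= low_kw:
        return "green"  # light use: glow green to affirm conservation
    return "amber"      # intermediate use (an assumed middle state)

if __name__ == "__main__":
    for reading in (0.6, 2.1, 4.5):
        print(reading, "kW ->", orb_color(reading))

Whether such a device merely informs or genuinely nudges is exactly what is at issue in the debate that follows.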
Hausman and Welch contend that the Ambient Orb is not a nudge because it simply makes users aware of information that reminds them to think carefully about energy use. In their view, the orb is no different from educational campaigns, warning labels on cigarettes, requirements that firms notify employees of hazards, and signs reminding people to drink more water on a hot day (2010, 127–8).


This position on what counts as a nudge is worth exploring. A nudge does not try to inform the automatic system, but rather works with the influence that biases inevitably have. Arguably, the Ambient Orb could be a nudge because it changes the atmosphere of the choice architecture to work with people's orienting moods (automatic thinking) in ways that reduce their energy use. Its effect on mood would be similar to how teachers' grading in red ink may negatively affect their students' confidence. A recent Tufts psychology study takes the issue further, showing that in addition to red influencing student perception, ''graders using red tended to grade more harshly when presented with both objective and subjective assignments'' because ''red ink encourages harsh grading.''5 Perhaps the Ambient Orb's effect on people is of this sort. It is not informational, but creates a mood conducive to energy conservation.
Does the Ambient Orb nudge energy users, or does it just provide them with information? We think the answer is debatable, which suggests that there are borderline cases that we will refer to as fuzzy nudges. Fuzzy nudges may raise different concerns depending on whether we think they are genuine nudges. For example, if we think the Ambient Orb just provides information, then there may be few concerns beyond whether we agree about its accuracy or standards of measurement. If the Ambient Orb nudges, then one might worry that relying on it does not teach users about energy savings and promotes ignorance. These concerns are more ethical in nature and are worth exploring in detail. We do so next.

Ethical anxieties and criticisms of nudges


Important ethical criticisms of nudging exist. Conservative pundit Glenn Beck goes so far as to depict Sunstein as ''the most dangerous man in America''.6 Describing a range of complaints, Patrick Basham notes:
The Wall Street Journal proclaimed the onset of the ‘nudge state’… Aditya Chakrabortty says
Thaler and Sunstein’s ‘vision could be described as the au pair state: a more informal, less
heavy-handed but still ever so slightly intrusive creature;’ meanwhile, the US Center for Con-
sumer Freedom finds that, ‘While Sunstein couches his plans as a ‘nudge’, we’d say it’s more
like a shove’ (2008).
Criticisms like these include vivid appeals to dystopian images of ‘‘Big Brother’’ and
‘‘Orwellian nightmares’’ (O’Neil 2011). They sometimes suggest that the very idea of
using nudges is patronizing since it rests on the assumption that the masses are too stupid
to make good decisions for themselves. Hausman and Welch (2010) note that anxieties of
this kind are exaggerated because choice architecture is unavoidable and cannot be mor-
ally problematic in itself. Indeed, people’s decisions can be influenced by the failure to
intentionally design choice architecture – sometimes for the worse. Nevertheless, since nudges tend to work best when people are unaware that they are influencing their behavior, they suggest Thaler and Sunstein oversell the choice-preserving and non-intrusive credentials of nudges. Would someone who values their freedom to choose be okay with the idea that their behavior is being modified in ways they are not aware of? Though there is a sense in which those being nudged have the same set of choices available to them, that may be little comfort to those who worry in particular about their degree of freedom in society.7
Luc Bovens (2009) claims that nudges may not be desirable because they can leave us
with ‘‘fragmented selves’’. We can act as fragmented selves when behaving one way
while under a nudge’s influence, but another when making decisions that are not
nudged. For example, we might eat healthily when selecting foods from a cafeteria that choice architects design to steer our behavior by placing fruits and vegetables at the front of the display. Yet we might adopt bad eating habits when selecting meals and snacks in other choice architectures. The issue of fragmentation concerns the development of moral character. If not challenged to learn to make good choices in most contexts, nudged individuals will expect other members of society to take responsibility for nudging them away from anything that is bad for them. Morally lazy, fragmented selves are quick to let others take responsibility for their welfare. They do not view the capacity to assume personal responsibility for their choices as a worthy goal.
People who do not value personal responsibility may accept forms of government that
are not accountable to citizens and in which most decisions about how to modify citi-
zens’ behavior are made behind closed doors without any public input, consent, or
engagement. Frank Furedi (2011) identifies three problems that would arise if nudges were to underwrite a society of this kind: practical wisdom would be diminished because knowledge of how to behave would be ''outsourced'' to experts; the public sphere would be ''devalued'', as significant matters of personal conduct would be given over to the ''behavior management industry''; and public life would be ''emptied out'', as crucial ''cultural, moral and political questions'' would be reductively translated into the narrow terms of behavioral economics, resulting in hype that leads to failed programs.
Rizzo and Whitman (2008) offer a criticism that complements Furedi’s political
concerns. They identify an array of slippery-slope issues in which, over time, a soft
intervention on people’s behavior creates conditions that lead them to accept more
external control over their lives. Whitman (2010) summarizes this vision when he
writes that
The story begins with the seemingly innocuous proposal to enroll all employees in savings plans
automatically (with the ability to opt out). Then it progresses to new default rules in contracts,
such as a presumption of ‘for cause’ rather than ‘at will’ employment, again with an opt-out.
And then? Default rules that can be waived only through a cumbersome legal procedure. Then
default rules with some options ruled out entirely – such as maximum hours that cannot be
waived for less than time-and-a-half pay. Then cooling-off periods for high-cost purchases.
Then sin taxes for fatty or sodium-rich foods. Then outright bans on ingredients like trans fats.
The main point, then, is that the more we become habituated to being nudged, the less we may be bothered by the incremental introduction of more controlling tactics.
Rizzo and Whitman (2008) also challenge the privileged position of choice architects in selecting which values and preferences the nudges they design promote. Choice architects can project the values and preferences of their conceptions of ideal decision-makers onto those who are nudged. There is no guarantee that these projections are in line with what users actually prefer to choose (see also Qizilbash 2009 for more on this problem in general). We will discuss an example of this problem in the next section.
Finally, Thomas Nagel (2011) acknowledges that our automatic system of thinking is
valuable and routinely performs a necessary role. He argues that anyone who seeks to
modify others’ behavior must satisfactorily determine which biases should be worked with
and which biases should be challenged explicitly. Nagel uses the example of racism.
While racism is bad, its origins may lie in a natural inclination to feel ‘‘greater sympathy’’
for others who resemble ourselves. Nagel thus writes:
How do we know that it would be better to counter the effects of this bias rather than to
respect it as a legitimate form of loyalty? The most plausible ground is the conscious and
rational one that race is irrelevant to the badness of someone’s suffering, so these differential
feelings, however natural, are a poor guide to how we should treat people.
Nagel's point is that just because a bias can be nudged does not mean it should be nudged. Though racism may stem from loyalty biases, the correct way to address it is not through a nudge, but through laws and policies that prohibit racism explicitly because it is a bad way of expressing loyalty. Neither the concept of nudging nor the scientific studies that choice architects may draw on to understand biases include procedures for determining which biases individuals and societies should nudge, maintain, prohibit, challenge, or abandon.
Though the social anxieties associated with nudges may be overblown, the ethical criti-
cisms should be taken seriously by Thaler and Sunstein along with choice architects who
see potential in nudging. Will nudges create the kind of society that we should want to
live in (Bovens; Rizzo and Whitman; Whitman; Furedi)? Can the concept of nudging
tell us what values and preferences to promote and which biases should be nudged (Rizzo
and Whitman; Nagel)? In the next section, we add to the set of challenges facing choice
architects by showing how understanding biases does not shed enough light on how
people perceive and interpret meaning in situations where they have to make choices.

Nudges and semantic variance


Even if those who advocate nudges respond to the criticisms just described, there is still a long way to go before choice architects have a reliable method for offering nudges. Nudges can be offered by policy professionals or by anyone in a position to improve the greater good, such as teachers and parents. Yet Thaler and Sunstein do not provide anything like a formula, technique, or method for creating and implementing successful nudges that could be used reliably by any of these potential choice architects. Instead, they simply inform us that in order to conceive of a possible nudge one must be aware of which biases people are subject to in a given situation and how such biases will influence decision-making. While literature exists on the sorts of general decision-making situations and cognitive biases for which nudges are needed, most real-world choice architects need to make substantial interpretive judgments about how the introduction of a nudge will affect the ways in which people perceive the meaningfulness of their decision situations. This gap in knowledge is captured by the following question: how can choice architects know how a particular group of users, in a particular set of circumstances, will respond to the changes in meaning that a nudge introduces into the choice architecture?
Let us consider two examples in which the issue of meaning is salient. Nudges that work in one cultural milieu may not work in another due to differences in cultural values, histories, and symbolic forms of understanding. For example, Thaler and Sunstein describe how images of flies etched into the men's urinals at Schiphol airport are an effective nudge for reducing spillage on the floor. However, it remains an open question which types of signs will prove effective for this end in different cultures (Bovens 2010, 483–4). It is easy to imagine, even if only as a thought experiment, that a culture exhibiting reverence for all life would be deeply offended by the sight of flies in the bathroom (Selinger and Whyte 2010).
Trevor Pinch (2010) presents another example:
Sound warnings are known to be more effective than visual warnings. BMW, in its high-end
German models, introduced a verbal warning of excess speed in the form of a woman’s voice.
It turned out to be totally ineffective as German male drivers paid little attention to a female
voice telling them to slow down! A male voice was found to be much more effective (489).
This nudge promotes sexism in Germany, and it might reinforce sexism in other cultures, too. The point here is not merely that well-intentioned nudges can perpetuate harms through unintended consequences. Examples like this serve as reminders, à la Rizzo and Whitman, that choice architects may project their values onto others. In this case, the fact that choice architects have different life experiences from those they nudge may prevent them from foreseeing how certain nudges can be problematic.
To understand how meaning bears on predictions about nudged behavior, it is important to distinguish between two contexts: 'semantic variance' and 'semantic invariance'. Semantic variance refers to situations where perceptions of meaning can vary dramatically when there are changes to the decision context, like a nudge. These situations can be difficult for choice architects to predict because the social sciences underlying nudges do not aim to clarify how people's perceptions of meaning change when the contexts in which they make decisions change. Semantic invariance, then, refers to situations where perceptions of meaning are not likely to vary when there are changes to the decision context. In the case of painting stripes on Lake Shore Drive, the nudge was offered in the decision context and the only effect was the desired one, i.e. on average people slowed down and accidents were reduced. Though we know that there are semantically invariant situations in which nudges would be effective, the problem remains of how we can know that a situation for which a choice architect wishes to offer a nudge is semantically invariant.
Consider the ‘‘Broken Windows Theory’’ of crime. Philip Zimbardo describes it as
the idea that the appearance of public disorder is a ‘‘situational stimulus to crime, along
with the presence of criminals’’ (2007, 25). He further states that:
Anything that cloaks people in anonymity reduces their sense of personal accountability and
civic responsibility for their actions. We see this in any institutional settings, such as our schools
and jobs, the military and prisons. Broken Windows advocates argue that alleviating physical
disorder – removing abandoned cars from the streets, wiping out graffiti, and fixing broken
windows – can reduce crime and disarray in the city streets (2007, 25).
Far from suggesting that Broken Windows Theory can be applied universally, Zimbardo emphasizes the importance of meaning. Through experimentation he found that an abandoned car in the Bronx (New York) evokes different responses than one abandoned in Palo Alto (California) because each area's setting expresses a different sense of what community means and what communal responsibility entails. Without doing localized sociological and psychological studies, it is impossible to ascertain what meaning a particular group attributes to either setting. None of the methods detailed by Thaler and Sunstein are explicitly useful in this regard.
While many questions related to meaning remain unanswered in the nudge literature,
Bovens offers an excellent taxonomy of the different basic ways that ‘‘nudges can be sub-
ject to semantic variance’’ (2010, 483). Within this spectrum, he draws an important gen-
eral distinction between ‘‘the workings of nudging’’ and ‘‘the aims for which nudges are
enlisted’’ (2010, 485). The aims can display cultural variance for the simple reason that
cultures can have different needs and can be oriented around different values. For exam-
ple, Western economies have a greater need for enhanced personal retirement savings
than those in the East. According to Bovens, this is because in the former, the ‘‘savings
rates are alarmingly low’’, whereas in the latter they are ‘‘too high’’ and ‘‘may stand in
the way of economic development’’ (2010, 485).
To get a sense of how the workings of nudging can exhibit cultural variance, we need only consider Bovens's illustrative examples of behavioral and perceptual biases. The former type of bias influences what people think and how they act. The latter type influences what people see, and thereby has the potential to influence what they think and how they act. Choice architects can find it difficult to make accurate behavioral predictions or to create nudges that elicit uniform behavioral responses when either type of bias is present.
When discussing behavioral bias, Bovens writes,
We see this kind of bias at work when countries in Europe that were occupied by Nazi Ger-
many refrained from introducing daylight savings time until the oil crisis in the 1970s because it
was a policy introduced by the Nazis (Bovens 2010, 484).
In this case, the bias is ‘‘to avoid patterns of agency displayed by people with whom one
does not wish to be associated’’ (484). In some cases, the converse holds true, i.e. people
wish to ‘‘emulate patterns of agency displayed by people with whom one does wish to
be associated'' (484). While many perceptual biases (e.g. Shepard's tabletops, which appear to be different sizes despite being identical) are universal and rooted in common features of embodiment and cognition, Bovens notes that cultural variability can still play a determinative role in acts of perceiving basic geometric relations. For example, ''cultural variance was found in many of the standard visual illusions (e.g. the Müller-Lyer illusion) with non-Western cultures being less susceptible'' (Bovens 2010, 485). Zimbardo's
abandoned car case could qualify as an instance of perceptual bias. We simply would need
to treat the concept as broad enough to cover instances where culturally specific values
influence what people perceive and the judgments they arrive at concerning those
perceptions.
The gap in knowledge related to semantic variance might be insignificant were its effects divorced from potential harm. Unfortunately, this is not the case. In some of the previous examples, those being nudged may be insulted or have their sexist biases reinforced. Other examples can be considered as well. Take the case of GPS systems, which have been designed to be useful to drivers in many ways, including offering tools to counter their tendency to drive too fast. In one particular TomTom model, once one starts breaking the speed limit, a notification appears on the screen highlighted in red, a color that evokes stop signs and the stop signal of a traffic light. However, one of the authors of this paper experiences the TomTom as having precisely the opposite effect! When he notices that he is speeding, he also notices that the estimated time remaining before the trip is completed shrinks, which changes the meaning of speeding in his perception and experience. This actually prompts him to try to reduce the trip time by visually measurable increments. In this case, the choice architect – who is using a fuzzy nudge like the Ambient Orb and red-inked exam corrections – mistakenly assumed semantic invariance to hold when, as the author's contrary experience attests, it is possible to interpret the situation in different ways – ways that seem like natural responses, given the circumstances.

Conclusion
Thaler and Sunstein’s nudge theory tries to solve important problems in smart, inexpen-
sive ways. Nevertheless, we have covered several pointed criticisms that ought to be
addressed more fully in future discussions of the viability of nudges. A February 2011 story in The Guardian reports that one UK minister described nudges as experimental and lacking in concrete evidence, but still worth exploring through the £520,000 it costs to staff the 'Nudge Unit', especially given the low risks of failure. This is an interesting perspective, as a National Audit Office report on regulation revealed that to date no executive-level department had adopted any nudge-inspired ideas.8 Likewise, a recent article in the British Medical Journal expressed concern that
to date, few nudging interventions have been evaluated for their effectiveness in changing
behaviour in general populations and none, to our knowledge, has been evaluated for its ability
to achieve sustained change of the kind needed to improve health in the long term (Bonell et al. 2011, d401).
We have outlined the key conceptual, practical, and ethical issues that advocates of nudges must respond to adequately if nudging is to be recognized as a promising mechanism for improving our lives and societies. Whether a theory of nudges can overcome these challenges, or whether nudge practice is inherently fraught with serious drawbacks, is a matter for future scholarship and practice to determine.

Acknowledgement
Our thanks go to Pelle Guldborg Hansen for allowing us to reprint and modify some of the material found in Selinger 2011.

Short Biographies
Evan Selinger is an Associate Professor of Philosophy and Graduate Program Faculty Member in the Golisano Institute for Sustainability, both at Rochester Institute of Technology. He is also Editor of the Springer journal 'Philosophy and Technology' and book series 'Philosophy of Engineering and Technology'. Evan has published extensively in the areas of philosophy of technology, ethics and policy of science and technology, phenomenology, and applied ethics. His monograph Expertise: Philosophical Reflections was published by Automatic/VIP Press in 2011, and his latest co-edited books are 5 Questions: Sustainability Ethics (Automatic/VIP Press 2010), Rethinking Theories and Practices of Imaging (Palgrave Macmillan 2009), and New Waves in Philosophy of Technology (Palgrave Macmillan 2009). Currently, Evan is prioritizing the National Science Foundation-funded project 'An Experiential Pedagogy for Sustainability Ethics'. Along with Braden Allenby (Lincoln Professor of Engineering and Ethics at Arizona State University) and Tom Seager (Associate Professor in the School of Sustainable Engineering and the Built Environment at Arizona State University), he is developing a set of innovative pedagogical recommendations and simulations that introduce students to moral dilemmas in sustainability that are engendered by non-cooperative, game-theoretic conflicts.
Kyle Powys Whyte is an Assistant Professor of Philosophy at Michigan State University and an affiliated faculty member at the Center for the Study of Standards in Society (CS3), the Peace
and Justice Studies Specialization, the Environmental Science and Policy Program, and the
American Indian Studies Program. He is an enrolled member of the Citizen Potawatomi
Nation in Shawnee, Oklahoma. Dr. Whyte writes on issues in environmental justice, the
philosophies of science and technology, and American Indian philosophy. His articles are
published in journals such as Synthese; Global Ethics; Agricultural & Environmental Ethics;
Knowledge, Technology & Policy; Ethics, Place & Environment; Continental Philosophy Review;
Environmental Philosophy; Philosophy & Technology; and Rural Social Sciences, and his research
has been funded by the National Science Foundation, Spencer Foundation, and U.S. Fish
and Wildlife Service. He is a member of the American Philosophical Association Commit-
tee on Public Philosophy. His PhD in Philosophy is from Stony Brook University.

Notes
* Correspondence address: Evan Selinger, Department of Philosophy, RIT Graduate Program Faculty, Golisano Institute for Sustainability, Rochester Institute of Technology, 92 Lomb Memorial Drive, Rochester, NY 14623, USA. E-mail: evan.selinger@rit.edu

1 For more on this example, see: http://tinyurl.com/2co8dln and http://tonecheck.com.
2 The ideas in this paragraph are referred to as libertarian paternalism. For a good introduction to it, see Thaler and Sunstein 2003a,b.
3 For the full story, see: http://money.usnews.com/money/blogs/alpha-consumer/2010/12/08/how-to-make-sure-your-2011-resolutions-stick.
4 For the full story, see: http://www.libdemvoice.org/the-independent-view-tackling-tobacco-a-new-years-resolution-for-the-government-22506.html.
5 For more on this study, see: http://www.tuftsdaily.com/features/teachers-who-use-red-ink-are-likely-to-grade-students-work-more-harshly-study-finds-1.2380695.
6 See http://www.glennbeck.com/content/articles/article/198/39348/.
7 Thaler and Sunstein may not be as libertarian as they claim. See note 2.
8 See http://www.guardian.co.uk/politics/2011/feb/20/nudge-unit-oliver-letwin.

References
Basham, P. 2010. ‘Are Nudging and Shoving Good for Public Health?’ A Democracy Institute Report. [Online].
Retrieved on 5 July 2011 from: http://tinyurl.com/4m6j6m9.
Bonell, C. et al. 2011. ‘One Nudge Forward, Two Steps Back.’ British Medical Journal 342: d401.
Bovens, L. 2009. ‘The Ethics of Nudge.’ Pp. 207–20 in Preference Change, edited by T. Grüne-Yanoff and S. O.
Hansson. Dordrecht: Springer.
Bovens, L. 2010. ‘Nudges and Cultural Variance: A Note on Selinger and Whyte.’ Knowledge, Technology & Policy
23(3): 483–6.
Burgess, A. 2010. ‘The Trouble with Nudge.’ [Online]. Retrieved on 5 July 2011 from: http://kent.academia.edu/
AdamBurgess/Blog/4521/The-Trouble-with-Nudge.
Furedi, F. 2011. ‘Defending Moral Autonomy Against an Army of Nudgers.’ Spiked. [Online]. Retrieved on 5 July
2011 from: http://tinyurl.com/6kfafka.
Beck, G. 2010. ‘Cass Sunstein – Infiltrate!’ [Online]. Retrieved on 5 July 2011 from: http://www.glennbeck.
com/content/articles/article/198/39348/.
Hausman, D. M. and B. Welch. 2010. ‘Debate: To Nudge or Not to Nudge.’ Journal of Political Philosophy 18(1):
123–36.
John, P., G. Smith and G. Stoker. 2009. ‘Nudge Nudge, Think Think: Two Strategies for Changing Civic Behav-
iour.’ The Political Quarterly 80: 361–70.
Lobel, O. and O. Amir. 2009. ‘Stumble, Predict, Nudge: How Behavioral Economics Informs Law and Policy.’
Columbia Law Review 108: 2098–138.
Ménard, J. 2010. ‘A ‘Nudge’ for Public Health Ethics: Libertarian Paternalism as a Framework for Ethical Analysis
of Public Health Interventions?’ Public Health Ethics 3(3): 229–38.
Nagel, T. 2011. ‘David Brooks’ Theory of Human Nature.’ New York Times. [Online]. Retrieved on 5 July 2011
from: http://www.nytimes.com/2011/03/13/books/review/book-review-the-social-animal-by-david-brooks.html.
Nolan, J., W. Schultz, R. Cialdini, N. Goldstein and V. Griskevicius. 2008. ‘Normative Social Influence Is Under-
detected.’ Personality and Social Psychology Bulletin 34: 913–23.
O’Neil, B. 2011. ‘Nick Clegg’s sinister nannies are ‘nudging’ us towards an Orwellian nightmare.’ The Telegraph.
[Online]. Retrieved on 5 July 2011 from: http://tgr.ph/g7gLsp.
Pinch, T. 2010. ‘Comment on ‘‘Nudges and Cultural Variance’’.’ Knowledge, Technology & Policy 23(3): 487–90.
Qizilbash, M. 2009. ‘Well-Being, Preference Formation and the Danger of Paternalism.’ Papers on Economics and
Evolution, Max Planck Institute of Economics, Evolutionary Economics Group 18: 1–30.
Rizzo, M. and D. Whitman. 2008. ‘Little Brother is Watching You: New Paternalism on the Slippery Slopes.’
SSRN eLibrary. [Online]. Retrieved on 5 July 2011 from: http://tinyurl.com/6879ruy.
Selinger, E. 2011. ‘Concerns Over Nudging.’ Initiative for Science, Society, and Policy Essay Series Vol. 2. [Online].
Retrieved on 5 July 2011 from: http://www.science-society-policy.org/news/issp-news/concerns-about-nudges/.
Selinger, E. and K.P. Whyte. 2010. ‘Competence and Trust in Choice Architecture.’ Knowledge, Technology & Policy
23(3–4): 461–82.
Stoker, G., and A. Moseley. 2010. ‘Motivation, Behaviour and the Microfoundations of Public Services.’ 2020 Pub-
lic Services Commission. [Online]. Retrieved on 5 July 2011 from: http://www.civicbehaviour.org.uk/documents/
2020ESRCFINALSTOKERandMOSELEY.pdf.
Thaler, R. and C. Sunstein. 2003a. ‘Libertarian Paternalism.’ The American Economic Review 93(2): 175–9.
Thaler, R. and C. Sunstein. 2003b. ‘Libertarian Paternalism is Not an Oxymoron.’ University of Chicago Law Review
70(4): 1159–202.
Thaler, R. and C. Sunstein. 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale
University Press.
Whitman, G. 2010. ‘The Rise of the New Paternalism.’ Cato Unbound. [Online]. Retrieved on 5 July 2011 from:
http://www.catounbound.org/2010/04/05/glen-whitman/the-rise-of-the-new-paternalism/.
Zimbardo, P. 2007. The Lucifer Effect: Understanding How Good People Turn Evil. New York, NY: Random House.
