People say that they value their privacy. However, when they are offered the opportunity
to trade private, personal information for immediate but minor benefits such as access to a
website, they routinely do so. Young people appear to value privacy even less, constantly
uploading revealing material about themselves and others to widely accessible social media sites
like Facebook. This so-called “privacy paradox” is frequently interpreted to prove that people do
not, in fact, care about privacy so much after all. However, sociological research shows not only
that people – including young people – assign importance to privacy, but also that they routinely
engage in complex privacy-protective behaviors (see, e.g., the discussion and references in
Marwick & boyd, 2014). If this is the case, then we confront a different privacy paradox. First,
how is it that the socially standard regimes of privacy protection featured in “notice and consent”
policies and other examples of “privacy self-management” so completely fail to represent how
individuals actually conceptualize and attempt to manage their privacy? Second, given the
totality of their failure, why are they so persistently taken to present a normatively adequate
understanding of privacy?
In what follows, I will use the work of Michel Foucault, in particular his later work on
ethics and subjectivity, to argue that privacy self-management regimes need to be understood as
what I will call a “successful failure.”1 That is, their failure to protect privacy tells only half the
story. The other half of the story is their success in establishing a very specific model of ethical
subjectivity. Specifically, I will argue that the current reliance on privacy self-management,
epitomized by notice and consent regimes, not only completely fails to protect privacy, but that it
does so in a way that encourages adherence to several core neoliberal techniques of power: the
belief that privacy can only be treated in terms of individual economic choices to disclose
information; the occlusion of the fact that these choices are demonstrably impossible to make in
the manner imagined; and the occlusion of the ways that privacy has social value outside
whatever benefits or losses may accrue to individuals. In Foucauldian terms, that is, privacy self-management functions as a neoliberal technique of power, teaching that subjectivity and ethical behavior are matters primarily of individual risk management.
1. Foucault’s work occupies a strange place in the study of privacy. On the one hand, his discussion of panopticism in Discipline and Punish (1977) arguably provided the organizing metaphor for an entire literature around surveillance, even if there has been a move over the last decade to Deleuze (see, e.g., Haggerty, 2006; Haggerty & Ericson, 2000; Lyon, 2006). On the other hand, if one considers “privacy” as an ethical norm or legal right, Foucault is nearly entirely absent from the discussion (notable exceptions are Boyle, 1997; Cohen, 2013; Reiman, 1995). This is no doubt due, in part, to the general treatment of privacy as a question of information disclosure by individuals (which makes the more sociological analysis of panopticism seem less immediately relevant), and to the subsequent adoption of a theory of decisional autonomy that Foucault rejected. For both of these, see the complaint in (Cohen, 2012a). What is lost in the reduction of Foucault to panopticism and privacy to formal autonomy is Foucault’s post-Discipline and Punish work on techniques of governance, biopolitics, and the formation of ethical subjects. As a whole, this body of work treats the ways that various social practices contribute to a process of “subjection” (or “subjectification”), and, in so doing, how they help to make us who we are. There is considerable discussion about the compatibility of these phases of Foucault’s work with each other. In particular, many Foucault scholars (e.g., McNay, 2009) think – generally to their frustration – that the later work on ethics is incompatible with, or at least on a completely different footing from, the work on biopolitics and governmentality. Foucault himself famously denied this charge, insisting that “it is not power, but the subject, which is the general theme of my research” over the preceding twenty years (1982, pp. 208-209). For a supportive assessment of Foucault on the point, see (Lazzarato, 2000). It is also possible that Foucault’s understanding of biopower changes between his introduction of the subject in 1976 and his later usage of it (for exemplary treatments, see Collier, 2009; Protevi, 2010). I will not attempt to resolve either debate here; for the sake of this paper, I will assert but not defend the view that the ethical “techniques of the self” open a path for studying the ways that biopolitics secures its own operation in the individual persons who use these techniques. Foucault’s studies of ancient Greek strategies for subjection, then, can be understood as models for studying analogous techniques in current society. Prima facie plausibility of this view comes from (a) Foucault’s own assertions (see above); (b) the extent to which the disciplinary techniques featured in Discipline and Punish are precisely about convincing individuals how they should view themselves as subjects of power; and (c) Foucault’s emphasis in his discussion of American neoliberalism (Foucault, 2008) on the attempt to reconfigure subjectivity along entrepreneurial lines. For discussion of this last point, see, e.g., (Hamann, 2009).
The paper proceeds as follows. In the first part, I offer a synthetic presentation of some
of the problems with privacy self-management as a model of privacy. The goal is to offer an
immanent critique in the sense that the problems with privacy self-management are entirely
predictable using the economic tools that are also used to defend the theory. In the second, I
outline the social benefits of privacy, and suggest how notice and consent not only occludes, but
actively subverts those benefits. The third section places the first two in the context of
Foucault’s late work on governmentality and ethics, to show how notice and consent can fail to
protect privacy, but succeed spectacularly as a strategy of subjection. Throughout, I will use social networking software (and Facebook specifically) as a lead example of the interaction between privacy self-management and the formation of subjects.

I. The Failures of Privacy Self-Management.
In the U.S., the standard strategy for protecting privacy online is self-management,
usually by way of “notice and consent” to a statement of terms of service or an End User License
Agreement (EULA).2 These contracts stipulate that users of the program or service consent to
certain uses of personal information that they make available about themselves. The theoretical justification for this regime rests both on the assumption that individuals behave rationally (in the economic sense) when they express privacy preferences and on the idea that their behavior adequately reveals those preferences. Unfortunately, neither assumption is true.
There are numerous good critiques of the self-management regime; my effort here is to offer something brief and synthetic. Here are three types of reasons why self-management can be expected to underprotect privacy: (1) users do not and cannot know what they are consenting to; (2) privacy preferences are very difficult to effectuate; and (3) declining to disclose is not a realistically free choice.
2. I draw the term “privacy self-management” from (Solove, 2013). Privacy self-management is essentially the current version of privacy as “control” over personal information, as found in earlier sources such as Westin and (Fried, 1968). Early versions of the theory included significant attention to sociological literature that described privacy in terms of social group formation. For the ways that this attention diminished, especially in the construction of the argument in Westin, see (Steeves, 2009). Complaints about the ubiquity of privacy self-management and its failures are common; in addition to Solove, see, e.g., (Cohen, 2012a, 2013; Nissenbaum, 2010). The situation in Europe is, at least on its face, quite different: the EU Data Protection Directive encodes a number of substantive privacy norms, which result in sector-specific differences from U.S. law (for a comparative discussion of healthcare, for example, see (Francis, 2014-2015)). The Directive is also currently being upgraded into a Regulation (the General Data Protection Regulation, GDPR), designed to update and strengthen its protections. There is considerable skepticism, however, as to whether this process will succeed. Bert-Jaap Koops (2014), for example – citing some of the same sources discussed here, such as Solove (2013) – argues that the proposed regulation relies too much on consent and its underlying value of autonomy. He also notes that the GDPR in its current version is extraordinarily complex, which reduces the likelihood that it will achieve effective protection, especially insofar as the complexity becomes a barrier to companies viewing privacy as a social value rather than a compliance-based hoop (a similar complaint is made in (Blume, 2014)). Thus, although considerations of space preclude extending the discussion into the details of EU law, it seems plausible that some of the same problems are present there as well.
(1) Users Cannot Know What They Are Consenting To: For consent to be meaningful, a user has to know what she is consenting to. Unfortunately, users are on the wrong end of a substantial information asymmetry, and there is good evidence that consumers neither
know nor understand the uses to which their data can be put (McDonald & Cranor, 2010). No
doubt this is in part because privacy statements are notoriously difficult to read, and there is a
substantial literature on the complexity of privacy statements and possible ways to make them
more accessible. Nonetheless, there are real limits to that sort of reform for at least three
reasons. First, websites have every incentive to keep privacy policies as vague as possible, and
to reserve the right to unilaterally modify them at will according to business needs. This is
particularly true of sites like Facebook, where the “product is access to individuals who have
entered personal information” (Hoofnagle & Whittington, 2014, p. 628; for a list of what
Facebook allows third parties to do, see 630-631). Second, sites freely sell information to third
parties, subjecting users to the third party’s privacy policy, which of course those users never
see. Finally, as I will discuss in the next section, detailed and clear privacy policies themselves
impose a burden.
3. Although I will not pursue the point in detail, these problems also afflict technical solutions that depend on predelegating privacy decisions to some sort of data vault or other software agent that encodes users’ data and their privacy preferences and then attempts to negotiate automatically with websites and other service providers. Assuming that websites would comply with such an approach on the part of users – and this assumption needs independent justification since, as I will note, companies like Facebook clearly make it difficult to protect one’s privacy on purpose – software agents could at most help with the difficulty of effectuating privacy preferences. The information asymmetries and uncertainty surrounding privacy decisions cannot be relieved by having a software agent, and setting the agent to refuse to disclose information may still carry unacceptable costs for users. Further, the use of agents only reinforces the idea that privacy is an alienable market good, entrenching the consent mindset.
Indeed, even if privacy statements could be made perfectly lucid, substantial information asymmetries would remain. As Hoofnagle and Whittington put it:

Despite lengthy and growing terms of service and privacy policies, consumers enter into
trade with online firms with practically no information meaningful enough to
provide the consumer with either ex ante or ex post bargaining power. In contrast,
the firm is aware of its cost structure, technically savvy, often motivated by the
high-powered incentives of stock values, and adept at structuring the deal so that
more financially valuable assets are procured from consumers than consumers
would prefer (Hoofnagle & Whittington, 2014, pp. 640-641).
That is, the problem of information asymmetry is structural, and cannot be remedied by clearer drafting alone. There is also a more fundamental problem, which is that data mining conspires to make consent meaningless
because the uses to which data will be put are not knowable to the user – or perhaps even the
company – at the time of consent.4 Some of these uses will be beneficial, no doubt, but they
could also be harmful. Users are in no position to assess either the likelihood or nature of these
future uses; in economic terms, this means that these decisions are made in conditions of
uncertainty, and are accordingly not easily amenable to risk assessment (Grossklags & Acquisti,
2007). Credit card companies have been innovators in such usage of data mining and other
analytics, and can serve as an initial illustration of the point. For example, Canadian Tire studied
its cardholders and discovered that buying felt pads to keep one’s furniture from scuffing the
floor predicted being a good credit risk; on the other hand, nearly half of cardholders who used
their cards at Sharxx Pool Bar in Montreal missed four payments in a year (Duhigg, 2009).
Another study concluded that obesity was associated with credit delinquency – up to 14 points
over the population-wide baseline for the extremely obese (and 3 points for the merely
overweight). Even after controlling for potentially confounding risk factors, obesity mattered
much more for credit delinquency than either recent marriage dissolution or disability (Guthrie &
Sokolowsky, 2012).
4. This is not a new concern: see, for example, (Tavani, 1998).
These sorts of analytics have made their way to Facebook, which, because of its enormous
user base and wealth of personal information, presents a tremendous opportunity for data mining.
In 2014, the company triggered an enormous backlash by publishing research indicating that
reception of negatively (or positively) coded information from one’s newsfeed tended to cause
users to express themselves more negatively (or positively). In other words, the well-known
“emotional contagion” effect did not require direct, interpersonal interaction (Kramer, Guillory,
& Hancock, 2014). Most of the objections centered around the study’s method, which subtly
manipulated the newsfeed of millions of users for a period of time. On the one hand, as danah
boyd (2014) points out, some of this criticism is difficult to understand, since consumers
routinely accept Facebook’s opaque manipulation of their newsfeed. On the other hand,
however, users had no reason to expect that this particular manipulation of their newsfeed was
coming (and they certainly did not actively consent to participate in that specific study), and they
have no way of knowing whether the results of the study will benefit or harm them. In any case,
this is hardly the first time Facebook’s data has been the basis for analytics. For example, one
(in)famous early study (Jernigan & Mistree, 2009) found that the percentage of a given user’s
Facebook friends who identified as gay was strongly correlated with his own sexual orientation –
even if he did not disclose it. It was thus relatively simple to out people on Facebook.
More recently, one study (Kosinski, Stillwell, & Graepel, 2013) reported that automated
analysis of FB “Likes” predicted race (African-American or Caucasian) with 95% accuracy, and
gender with 93% accuracy. It also correctly classified Christians and Muslims (82%),
Democrats and Republicans (85%), and sexual orientation (88% for males, 75% for females), among other attributes. As the authors summarize:
Individual traits and attributes can be predicted to a high degree of accuracy based
on records of users’ Likes …. For example, the best predictors of high intelligence
include “Thunderstorms,” “The Colbert Report,” “Science,” and “Curly Fries,”
whereas low intelligence was indicated by “Sephora,” “I Love Being A Mom,”
“Harley Davidson,” and “Lady Antebellum.” Good predictors of male
homosexuality included “No H8 Campaign,” “Mac Cosmetics,” and “Wicked The
Musical,” whereas strong predictors of male heterosexuality included “Wu-Tang
Clan,” “Shaq,” and “Being Confused After Waking Up From Naps.”
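Mechanically, predictions of this kind require nothing exotic: a user-Like matrix and an off-the-shelf classifier suffice. The following sketch uses simulated data and a plain logistic regression; it illustrates the general technique, not the study’s actual pipeline (which reduced the Like matrix with SVD before running regression models).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 300
likes = rng.integers(0, 2, size=(n_users, n_likes))  # user-Like matrix, 1 = Liked

# Simulate a binary trait weakly correlated with a handful of Likes,
# standing in for patterns like the "Curly Fries" example in the text.
signal = likes[:, :10].sum(axis=1)
trait = (signal + rng.normal(0, 1.5, n_users) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))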
Even one Like could have “non-negligible” predictive power. Although the epistemic value of
such correlations can and should be challenged (boyd & Crawford, 2012), highly predictive correlations pervade the dataset. As the authors note:
For example, users who liked the “Hello Kitty” brand tended to be high on
Openness and low on “Conscientiousness,” “Agreeableness,” and “Emotional
Stability.” They were also more likely to have Democratic political views and to
be of African-American origin, predominantly Christian, and slightly below
average age.
And so it goes. It is hard to understand how disclosing one’s affection for Hello Kitty implies
consent later to disclose possible emotional instability, since that correlation was unknowable to
the user (and possibly to anyone at all) at the time of consent. Analyzing actual content may not
even be necessary; one study reports that network analysis alone (i.e., looking only at metadata)
was able to identify someone’s spouse 60% of the time, concluding that “crucial aspects of our
everyday lives may be encoded in the network structure among our friends” if we know how to
look (Backstrom & Kleinberg, 2014). Even failure can be informative: the same study noted that when the analysis failed to correctly identify a romantic partner, the relationship was significantly more likely to dissolve in the months that followed.
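The intuition behind the metric, which Backstrom and Kleinberg call “dispersion,” can be sketched in a few lines: a romantic partner is often the friend whose mutual friends with you are themselves poorly connected to one another. The toy graph and the simplified (non-recursive, unnormalized) measure below are my illustration, not the paper’s exact formulation.

import itertools
import networkx as nx

def dispersion(G, u, v):
    # Count pairs of mutual friends of u and v that are not themselves
    # adjacent and share no common neighbor other than u and v.
    common = set(G[u]) & set(G[v])
    score = 0
    for s, t in itertools.combinations(common, 2):
        if not G.has_edge(s, t) and not ((set(G[s]) & set(G[t])) - {u, v}):
            score += 1
    return score

G = nx.Graph()
# "u"'s friends: a tight work cluster (w1-w3), two family members (m1, m2),
# and a partner "p" whose mutual friends with u span both spheres.
G.add_edges_from(("u", n) for n in ["w1", "w2", "w3", "m1", "m2", "p"])
G.add_edges_from([("w1", "w2"), ("w2", "w3"), ("w1", "w3")])  # work cluster
G.add_edges_from([("p", "w1"), ("p", "m1"), ("p", "m2")])     # partner spans spheres

scores = {v: dispersion(G, "u", v) for v in G["u"]}
print(max(scores, key=scores.get))  # -> "p" (dispersion 3, beating "w1" at 2)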
Users might further worry about the uses to which such newly-minted information might
be put (for a sobering discussion, see (Pasquale, 2015)). Warnings about exposure to stalking,
identity theft and the like have been a regular feature of critical discussions of social networking
for some time (see, e.g., Gross & Acquisti, 2005; Jones & Soltren, 2005), but some specific
examples can serve to underscore the disturbing possibilities of privacy harms, as well as their
unexpected nature. To return to Hello Kitty, MedBase2000 will sell you a list of “Anxiety and
Anxiety Disorder Sufferers” for $79 per 1000 names (cited in Harcourt, 2014, p. 24), and the
authors of the Likes study propose that the “relevance of marketing and product recommendations” could be improved by such models, and cite as an example that insurance products could emphasize security when marketing to the emotionally unstable, but potential threats when marketing to the stable (Kosinski et al., 2013, p.
4). Perhaps more disturbingly, Zeynep Tufekci (2014) reports an internal Facebook study that
discovered that subtle ‘go and vote’ reminders slightly but significantly increased election
turnout, in amounts more than sufficient to swing a close election. If FB or another major
internet platform were to direct such messages only to voters likely to support candidates
receptive to whatever policy objectives the company favored, the temptation would be there to do so, and no one outside the company would ever know.
Companies can also use even very simple data to enable disturbing offline behavior by
others. For example, the iPhone App GirlsAroundMe (eventually removed by Apple) scraped
Facebook profiles and FourSquare location data to let users (in the terms advertised on its
website) “see where nearby girls are checking in, and show you what they look like, and how to
get in touch.” In case users hadn’t already thought the matter through, the site helpfully adds, “in
the mood for love, or just for a one-night stand? GirlsAroundMe puts you in control.”5 Beyond such apps, Hoofnagle and Whittington note that such “enhancing” of data is a readily available strategy online: if a user can
be induced to share some information with a site (especially name and zip code), the site can
then purchase information about her that she refused to provide (2014, pp. 631-634).
5. GirlsAroundMe.com, visited 6/2014. There is a periodic reference to “guys” on the site’s front page, but it mainly proves its own exceptional status: the pictures are all of women.
In sum, users do not and cannot plausibly be expected to know enough – neither about
the uses to which their information might be put, nor about the specific benefits and harms that
might result from those uses, nor about the likelihood that such harms might result – for consent
to be meaningful, especially if one makes the assumption that those users are following a
risk/benefit model of economic rationality. Even to make that assumption requires the tools of
behavioral economics, which can attempt to preserve the assumption of risk/benefit behavior by
providing explanations for apparent deviations from that model (for example, hyperbolic
discounting of the future, which suggests that individuals will undervalue distant harms
compared to those in the present or near future). It is, however, unclear what cognitive biases
and heuristics are in play at any given moment.6 Additionally, as I will indicate in the third section, the behavioral economic framing is itself part of the problem.
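A toy calculation shows why hyperbolic discounting matters here (standard textbook functional forms; all parameters arbitrary): a harm years away shrinks far more steeply under hyperbolic than under exponential discounting, so distant privacy harms can look negligible at decision time.

def exponential(value, years, rate=0.05):
    return value / (1 + rate) ** years

def hyperbolic(value, years, k=1.0):
    return value / (1 + k * years)

harm = 100.0  # hypothetical cost of a future privacy breach
for years in (0, 1, 5, 20):
    print(f"{years:>2}y  exponential={exponential(harm, years):6.1f}  "
          f"hyperbolic={hyperbolic(harm, years):6.1f}")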
6. For behavioral economics and privacy, see particularly the work of Alessandro Acquisti, e.g., (Acquisti, 2009; Grossklags & Acquisti, 2007).
In sum, the epistemic problems faced by users are significant enough that Katherine
Strandburg suggests that the entire analogy to purchasing a good with one’s information is
misplaced; “better than the analogy to a purchase transaction would be an analogy to obtaining
free medical care in exchange for participating in a trial of a new medical treatment”
(Strandburg, 2013, p. 151). In particular, she underscores how hard it is for users to measure the marginal disutility of any given disclosure:
For an Internet user to weigh the costs and benefits of a particular online activity,
the user must estimate the marginal expected disutility of the particular data
collection associated with that activity. To determine marginal disutility, an
Internet user must have information about how the incremental data collected in
association with the particular activity changes the overall availability of
information about her in the online ecosystem. Not only that, she must be able to
connect that increment in available information to an increment in expected
disutility. This is essentially an impossible task (2013, pp. 147-148).
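One way to put Strandburg’s point in symbols (my notation, not hers): for an incremental disclosure d made against the background data D already available about the user, she would need to estimate

\Delta\mathbb{E}[U] \;=\; \sum_{h} \Pr\big(h \mid D \cup \{d\}\big)\,L(h) \;-\; \sum_{h} \Pr\big(h \mid D\big)\,L(h)

where h ranges over possible future harms and L(h) is the disutility of each. Neither the conditional probabilities nor even the set of harms over which the sums run is available to the user, which is exactly why the task is impossible.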
(2) Difficult to Actualize Preferences: People may have difficulty effectuating their
privacy preferences. This problem is endemic on sites like Facebook, and it is not obviously the
fault of users: successful navigation of the site’s constantly changing privacy policies requires a
commitment to continually mastering and remastering byzantine detail and complexity. The
only seemingly consistent rule is that the software will default to openness.7 Empirical research
repeatedly demonstrates that Facebook users do not successfully effectuate their privacy
preferences, and that they often do not even know this. For example, one recent study found that
a full third of Facebook users left privacy settings at their open-sharing default, and that nearly
two thirds have actual privacy settings that do not match what they think those settings are –
almost invariably in the direction of more disclosure (Liu, Gummadi, Krishnamurthy, &
Mislove, 2011). Another, which collected data from Columbia University students, found that:
93.8% of participants revealed some information that they did not want disclosed.
Given our sample, it is virtually certain that most other Facebook users have
similar problems. On the other hand, we note that 84.6% of participants are
hiding information that they wish to share. In other words, the user interface
design is working against the very purpose of online social networking. Between
these two figures, every single participant had at least one sharing violation.
Either result alone would justify changes; taken together, the case is quite
compelling (Madejski, Johnson, & Bellovin, 2011, p. 11).
In other words, FB’s interface design consistently frustrates users’ ability to achieve their privacy
objectives.8
7. Defaults matter. Research indicates that most individuals do not change software or other defaults (as, for example, with 401(k) participation, which can be raised dramatically by simply switching from an opt-in to an opt-out default). The reasons for this are partly economic – changing defaults (especially on Facebook) takes time and effort – and partly normalizing: the default setting communicates what an “average” or “reasonable” user ought to prefer. See (Shah & Kesan, 2007).
Conversely, too many opportunities for consent can themselves make it more difficult for
users to effectuate their privacy preferences.9 One study concluded that if all web users were
simply to read (once) the privacy policies of the sites they visit, they would each spend about 244
hours a year on the task, more than six work-weeks. That diversion would cost the U.S.
economy something on the order of $781 billion annually in lost time, and the calculation did not
include any time for comparison shopping among privacy policies, a requirement for any claim
that markets could develop to satisfy user privacy preferences (McDonald & Cranor, 2008). Sites
also appear to actively exploit this time commitment to make it more difficult to opt out of information sharing. According to one recent report (Albergotti, 2014), for example, Facebook now allows users to opt out of sharing their information with data brokers. This seems good, but
there is no global opt-out; users must instead right-click on individual ads, visit an external site,
and follow that site’s opt-out procedure. Also, apparently, the opt-out preference is stored in a cookie, so that clearing one’s cookies perversely removes that bit of privacy. Finally, and of course, the opt-out arrived at the same
time as Facebook’s decision to use its users’ web-browsing histories as part of its formula for ad
targeting.
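The shape of the McDonald and Cranor estimate is easy to reproduce. In the sketch below, the online-population and hourly-value figures are illustrative stand-ins of my own (not their exact inputs), chosen to land near their published aggregate; only the 244-hour figure comes from the text above.

HOURS_PER_USER_PER_YEAR = 244   # reading each visited site's policy once
US_INTERNET_USERS = 180e6       # assumed online population (stand-in figure)
VALUE_PER_HOUR = 17.8           # assumed blended value of time (stand-in figure)

total_hours = HOURS_PER_USER_PER_YEAR * US_INTERNET_USERS
print(f"work-weeks per user: {HOURS_PER_USER_PER_YEAR / 40:.1f}")
print(f"aggregate cost: ${total_hours * VALUE_PER_HOUR / 1e9:,.0f} billion per year")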
8. One study found that nearly a quarter of respondents regretted mistaken oversharing on Facebook,
reporting loss of important relationships and employment (Wang et al., 2011). In an earlier paper, two colleagues in
human-computer interaction and I made the case that FB’s privacy problems are design-related: see (Hull, Lipford,
& Latulipe, 2011). For more on the HCI implications of privacy, see, for example, (Dourish & Anderson, 2006).
9. For a theoretical development of this concern, see (Schermer, Custers, & van der Hof, 2014).
(3) The Choice isn’t really “Free:” All but the most formalistic models of consent
recognize that nominally uncoerced choices can nonetheless be so constrained that it is difficult
to call them “free.” The problem is when the cost of exercising a choice becomes too high for
someone reasonably to bear. In other words, it needs to be reasonable to forego whatever benefit
the user loses to keep her privacy. As more and more of life moves online, and as more and
more websites condition access on various forms of tracking, it will become increasingly
difficult to resist information disclosure. Specifically, Facebook use has been tied to college
students’ social capital for several years now; asking a student to forego Facebook in order to
preserve her privacy is putting a high and increasing price on privacy preservation (Ellison,
Steinfield, & Lampe, 2007). The difficulty is not just online, as the finding that Facebook users
use the program to maintain offline relationships is robust (Lampe, Ellison, & Steinfield, 2008),
and recent studies document the extent to which for college students in particular (who are also
heavy users of mobile technology, including to access Facebook), “Facebook is probably, even
more than other communication means, the glue in many students’ life” because it facilitates
casual offline interactions for a population that leads “socially complex, nomadic lives” (Barkhuus & Tashiro, 2010).
Although a lot of the research and attention around social media has centered on college
students in the U.S., it should be noted that its importance in other contexts is increasing as well.
For example, there is evidence that Iraqi civilians used Facebook to maintain and develop safe
social interactions (including religiously mandated interactions with relatives during Ramadan)
when actual trips outside of the home were too dangerous to undertake (Semaan & Mark, 2012).
These are the sorts of benefits users have to forego to maintain their privacy.
II. Privacy as a Social Value.
Privacy is not just an individual value. It is also a social value: a society in which some
level of privacy is present is better not just for the individuals who assign value to their privacy,
but for everyone else too. In economic terms, privacy protection has positive externalities that
are not captured by privacy self-management, and which are often at odds with it. Of the social value of privacy, Daniel Solove writes:

Society involves a great deal of friction, and we are constantly clashing with each
other. Part of what makes a society a good place in which to live is the extent to
which it allows people freedom from the intrusiveness of others. A society
without privacy protection would be suffocating, and it might not be a place in
which most would want to live. When protecting individual rights, we as a society
decide to hold back in order to receive the benefits of creating the kinds of free
zones for individuals to flourish (Solove, 2007, p. 762).
Most basically, Dourish and Anderson emphasize that privacy functions both to set group boundaries through the selective flow of information – a role that operates independently of the content of that information – and to define (with security) what risks a social group considers tolerable (Dourish & Anderson, 2006). Privacy, in short, is
essential to the maintenance of the social networks within which individuals can flourish.
Research on Facebook underscores the gap between these social practices and privacy
self-management. The research on SNS indicates that individuals are at least partially motivated
by normative social considerations, even if they do not clearly see the connection between
sharing with friends on Facebook and sharing with Facebook. That is, users try to preserve
privacy norms in their social dealings with one another, but seem less concerned to protect their
privacy against Facebook (Raynes-Goldie, 2010; Young & Quan-Haase, 2013). One study
reported that users were concerned not to overshare, both out of a sense of modesty and out of a
desire not to burden even close ties with excessive communications (Barkhuus, 2012). Another
noted that student perceptions of the appropriateness of sharing particular information in SNS
depended on their perception of whether the context was private or public (Bazarova, 2012).
Students also expressed a strong norm against showing up, unannounced, based on a posting of
location data (Barkhuus & Tashiro, 2010, p. 138). These results suggest that privacy practices
on the ground are highly granular and context-sensitive. Users also attempt to circumvent some
of the data-sharing affordances of SNS software, as I note in the final section. Taken together,
this evidence indicates a real disconnect between privacy self-management theory and how
privacy actually functions; i.e., that “social factors rather than concerns about a company’s
privacy practices may be the primary factor that influences disclosure behavior” (King,
Lampinen, & Smolen, 2011, p. 5).10 Evidence such as this allows one to underscore at least two
points: first, it gives the lie to the meme that “young people do not care about privacy.” Second,
it shows that the notice and consent model, which presents privacy as a purely economic
transaction between a user and a service provider, not only fails to capture actual privacy practices, but actively works against them.
10. For more evidence of this point – that social norms and social factors, like trust among group members, influence disclosure behavior in the context of SNS – see, e.g., (Nov & Wattal, 2009). One ethnographic study suggests that users care about social privacy (both engaging in a number of privacy-protective behaviors and using lax privacy settings to engage in some social surveillance) but not about what Facebook as a company does with their information (Raynes-Goldie, 2010). Users also appear to carry social norms from one context into a new one that they perceive to be analogous (Martin, 2012).
More broadly, there are other social benefits to privacy as well. Anita Allen, for
example, notes that “we need expert advice and a range of competently performed services in
order to flourish as citizens” (2011, p. 109; more generally, see 99-122). Most people at some
point require the services of a doctor, an attorney, an accountant, a tax preparer, and so forth.
Without enforced confidentiality in these relationships, the social purposes they serve (like
public health and the smooth functioning of the contract system undergirding the economy)
would be undermined. Other social benefits include those surrounding free association; for
example, a landmark Supreme Court ruling prevented Alabama from compelling the NAACP to
submit its membership lists to the state. Such associational privacy unquestionably protects
those in socially disfavored groups (Allen, 2011, pp. 123-156). Along the same lines but more
broadly, as Julie Cohen (2013) argues, privacy serves at least two vital social functions. First,
privacy allows individuals to develop with the independence and space for critical thinking to
engage in meaningful civic participation.11 Second, privacy, and not its absence, turns out to be
critical for innovation, since innovation happens when individuals encounter unexpected
constraints, situations, and opportunities, and then have the space to tinker and experiment with
them.12
11. On this point, see also (Reiman, 1995), pointing out that privacy is therefore required for the development of the sort of subject who is able to rationally assess and trade away her privacy.

12. Cohen’s claim about innovation – which flies in the face of orthodoxy – has not gone unchallenged. See, e.g., (Strahilevitz, 2013, p. 2040 n125).
Additionally, privacy rules and norms have distributional consequences, and these do not
always happen in predictable ways (Strahilevitz, 2013). One issue is that privacy protection can
present a collective action problem in the form of a classical prisoner’s dilemma. That is,
privacy self-management does not just obscure the social benefits of privacy, it actively subverts
them: particularly once certain tipping points are reached, it will almost always be individually
rational to forego privacy, even as the social benefits of privacy remain collectively rational.
This problem is most strikingly shown in what Scott Peppet calls the “unraveling effect.” Peppet
proposes that, even if we assume that users retain perfect control over the information they
disclose and to whom they disclose it, the economics of data disclosure will tend toward an
inexorable reduction of privacy (Peppet, 2011). Take Progressive Insurance’s “good driver”
discount for people who are willing to have tracking devices monitor their driving. Good drivers
will have a financial incentive to signal their good driving by volunteering to be monitored.
Marginal drivers will soon face powerful incentives to be included with the good drivers, and so
they will also volunteer to be monitored. Eventually, even terrible drivers, who have every
incentive to avoid monitoring, will be volunteering, in order to avoid the sudden presumption
that non-participants have something terrible to hide. The general point is that users at the top of
whatever category they are being sorted into often have an economic incentive to signal their
superior status. Users near the top then have a reason to do the same. And so it goes; eventually, not disclosing is itself taken as a signal that one has something to hide.
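Peppet’s dynamic can be made vivid with a toy simulation (all numbers hypothetical, not drawn from his article): if the insurer presumes every non-discloser to be merely average for the remaining pool, each round of disclosure lowers that presumption, which pulls the next tier of drivers into monitoring.

import numpy as np

rng = np.random.default_rng(1)
quality = rng.uniform(0, 1, 1000)          # driving quality, higher is better
disclosed = np.zeros(quality.size, dtype=bool)

round_no = 0
while True:
    pool = quality[~disclosed]             # drivers still refusing monitoring
    if pool.size == 0:
        break
    presumed = pool.mean()                 # insurer's guess for non-disclosers
    newly = (~disclosed) & (quality > presumed)
    if not newly.any():                    # only the very worst remain hidden
        break
    disclosed |= newly
    round_no += 1
    print(f"round {round_no}: {disclosed.mean():.1%} now monitored")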
An important feature of the preceding analyses is epistemic: not only are individuals
incapable of knowing the economic value of their privacy, either to themselves or to others (as
noted in the previous section), but the social benefits of privacy are also both substantial and difficult to quantify. Privacy self-management nonetheless requires individuals to make exactly these sorts of calculations, or at least to believe that they should be making such calculations. Such forced conversion of aleatory, chance moments into risk calculations – the burden for which is borne entirely by those compelled to make them – is a hallmark of neoliberal rationality.13 The focus on individual self-management can also distract from more basic questions; as Strandburg (2013) points out, there is
actually very little good research that says that data-driven, targeted advertising (where the shoe
ad you looked at yesterday follows you around the Web) is any more effective than contextual
advertising (where you see Ford ads displayed on a car repair page). Context-based advertising
has none of the privacy difficulties of the data-driven approach, and it is a lot cheaper. That has not, of course, slowed the development of increasingly complex targeted models, against the more general backdrop of the neoliberal presentation of the ideal individual as an “entrepreneur of himself,” maximizing his expected returns based on calculated risks and investments of his human capital.

13. See, e.g., (Amoore, 2004) (on the individualization of risk in the workplace); (Binkley, 2009) (parenting guides); (Cooper, 2012) (increasing contingency of paid work); (Ericson, Barry, & Doyle, 2000) (disaggregation in insurance); (Feher, 2009) (centrality of human capital and notions of entrepreneurship); (Lazzarato, 2009) (importance of financialization); (Reddy, 1996) (role of expert knowledge); and (Simon, 2002) (rise of extreme sports as emblematic). For a very accessible general discussion, see (Brown, 2005).

III. Notice and Consent as a Technique of Power.
Notice and consent, then, fails completely as a strategy for privacy protection: it presents
users with choices they cannot rationally make, and consistently fails to make legible the many
social reasons why privacy is valuable. Rather than focus on privacy self-management as a
failure, however, I would like in this section to propose that it serves very well as what Foucault
would call a “technique of power.” As such, its empirical success in protecting privacy or not is
less relevant than the ways it presents an information environment in which individuals see privacy as an alienable personal good they can easily trade for other desired goods and services.
The general thesis that our usage of information technologies affects not just what we do,
but who we are, has been well-studied in a range of contexts from video games to social media, where, among other things, the norms of friendship are negotiated.14 My choice of Facebook as a case study was in this regard not
accidental, because, as the above indicates, Facebook and other SNS are clearly involved in how
people develop and maintain their identities, particularly as they engage with others. As Julie
Cohen (2012a) has recently argued in extensive detail, the subject who chooses (or not) privacy
does not come to her informational environment fully-formed. Rather, subjects are partly constituted by those environments. Our choices about disclosing information to Facebook will accordingly become a part of who we are; to the extent that
Facebook nudges (or shoves) those choices in a particular direction, Facebook is itself an active
agent in how we understand and constitute ourselves. Cohen suggests that one area of particular
concern is the way that networked environments increasingly rely upon regimes of access
control. From the individual’s point of view, this often manifests itself as a choice between
making oneself increasingly transparent to corporate and governmental entities, or being denied
access to something of importance. Every time that we “click here to accept” or otherwise view
ourselves as making an autonomous choice in that regard, we further naturalize these regimes,
the endpoint of which lies in a mode of governmentality whose objective is not that we desire a
particular thing or not, but that we only have the sorts of desires that can be monetized. An
important component of this is inculcating the belief that all choices are autonomously made by
subjects who emerge fully-grown into an information environment. That is, the belief that
subjects are autonomous and exogenous to their information environments serves to occlude
precisely the extent to which that environment very carefully molds the sorts of subjects that can emerge within it.

Foucault’s name for this process, “subjection,” refers to “the different modes by which, in our culture, human beings are made subjects”
(Foucault, 1982, p. 208). The details of Foucault’s understanding of subjectification need not
detain us here, but the general point is that human self-understanding (and thus, the “truth” about
who we are) is substantially a product of our interactions with various regimes of social, legal,
and other forms of power, i.e., “the way in which the individual has to constitute this or that part
of himself as the prime material of his moral conduct” (Foucault, 1985, p. 26). How we achieve
this he calls the “mode” of subjection, which is “the way in which the individual establishes his
relation to the rule and recognizes himself as obliged to put it into practice” (27).
14. For social networking, see the discussion and cites above. For social networking and robotics, see also (Turkle, 2011). For violent video games, see (McCormick, 2001; Waddington, 2007; Wonderly, 2008). I advance the thesis in the context of library filtering programs (Hull, 2009) and digital rights management (Hull, 2012). It is important to note that one does not have to be a reader of Foucault to arrive at this hypothesis: for the “extended mind” hypothesis and its application to technological environments, see (Clark, 2003), and for an argument motivated very much by classical liberalism, see (Benkler, 2006).
There are many different ways of achieving this relation; among those most relevant in
the context of privacy is recognizing oneself as part of a group that practices certain methods of
subjectification. The most obvious instances are the ways that friendship changes as it is
mediated online, and that online information facilitates social surveillance of users by one
another. But there are more subtle ways, as well. Not only do Facebook users repeatedly accept
the sharing of their (and their friends’) information with third parties, they are also forced to
navigate the complex privacy policies of the site, including (until recently) its complete lack of granular privacy controls. Social contexts are also structured differently online and off, and users must be adept at navigating the differences.15 Users must
also develop social rituals for avoiding and managing the inevitable privacy breakdowns.
Teenagers on MySpace, for example, tended to have more than one profile – one for parents, and
one for friends (boyd, 2007). They also become adept at esotericism, writing material that can
be read one way by friends and another by parents (Marwick & boyd, 2014). At an even more
fundamental level, the structure of SNS rewards those who disclose information and nudges
those who do not to greater disclosures. The point is underscored in a study of Canadian college
students, which found that “disclosure … becomes an aspect of identity construction, and that
construction is linked with popularity: the people who are most popular are those whose identity
construction is most actively participated in by others” (Christofides, Muise, & Desmarais, 2009,
p. 343).
15. On this, see, for example, (Binder, Howes, & Sutcliffe, 2009) (finding that the “social space provided by SNS typically lacks boundaries and segmentation that are characteristics of offline networks. Boundaries between social spheres occur naturally in offline networks, mostly due to spatial/temporal separation of contacts. This elaborate structure is dropped in online environments” (966)).
Such repeated, structured choices are precisely the kind of practice by which, on Foucault’s account, ethical subjects are formed:

A moral action tends toward its own accomplishment; but it also aims beyond the latter, to the establishing of a moral conduct that commits an individual, not only to other activities always in conformity with values and rules, but to a certain mode of being, a mode of being characteristic of the ethical subject (Foucault, 1985, p. 28).
In other words, users are presented with a repeated choice: more privacy or less? Everything
about the context of that choice encourages them to answer “less.” This in turn habituates them to treating disclosure as the default.
At the same time, each iteration of the norm of less privacy further entrenches its status
as a norm. To the extent that individuals’ privacy decisions are context-dependent, this means
that individuals will make fewer privacy-protective decisions. Successful iterations in turn
support a larger social narrative that privacy is primarily an antiquated roadblock on the path to
greater innovation (Cohen, 2013). Even if that narrative is ultimately untrue, it has the further
function of neutralizing and depoliticizing the distributional effects of treating users’ information
as sources of capital accumulation. In other words, it also obscures that transactions aren’t even,
at the end of the day, about privacy: they are about websites obtaining information which they
can then sell to other data brokers (Cohen, 2012b). The result is a paradigmatic instance of the
operation of power, in the sense given by Foucault, where “it is a total structure of actions
brought to bear upon possible actions; it incites, it induces, it seduces, it makes easier or more difficult; in the extreme it constrains or forbids absolutely” (1982, p. 220).
In the present context, the various interconnected logics of power construe individuals as
primarily economic actors, as instantiations of homo economicus. As Foucault (2008) notes, the
emergence of American neoliberalism in theorists such as Becker was perhaps most distinctive
in its insistence that all aspects of social existence could be modeled on economic cost-benefit
analysis. Of course, as the privacy paradox indicates, not all of life seems immediately amenable
to such analysis. Behavioral economics, which nuances the model, becomes at this moment a
part of the problem because it presents economically irrational behavior as a deviation from
correct, rational decisionmaking. Presenting the deviation as irrational then allows the problem to
be packaged as poor risk-modeling by data subjects, who can be nudged into doing better. The
process thus repeats the presentation of privacy as an economic decision, both occluding the
actual uncertainties users face and offering the possibility of corrective strategies on the part of
websites wanting to elicit “rational” behavior.16 As John McMahon notes, the behavioral
economic strategy thus serves to depoliticize social problems. In the present context, this means
both the further erosion of our ability to see privacy as a social and political construct and the
further subsumption of social life into markets. As McMahon puts it, “if there is a rationality to
the irrationality of economic actors, then the market can respond to, change and/or create” that rationality.
The anxious Facebook user who constantly monitors her online behavior to avoid getting
fired thus enacts only one point in a wide set of techniques by which individuals are conditioned
to view themselves as both profitably on permanent display, and individually responsible for
what others see. This, then, is the real significance of privacy self-management. It is not that it
does or does not protect individual privacy; it is that it construes individuals as consumers who
make economic decisions about the value of their privacy as an alienable commodity. In so doing, it encourages them to understand themselves as rational consumers, even in areas (like sharing app games with friends, or, indeed, with regard to “friendship” itself) that might not otherwise initially appear to be part of the market.17
16. It is true that users can be nudged to more privacy-protective behavior, but when undertaken in the therapeutic terms of behavioral economics, this nudging serves to even further entrench the framing of privacy as a problem for economic rationality.

17. For the argument that the EU GDPR creates similar myths – both that privacy is more protected than it is, and that subjects are more empowered than they are – see (Blume, 2014; Koops, 2014).
What, then, of the “privacy paradox,” when individuals say they value their privacy, but
then behave as if they do not? Foucault proposes that power and resistance always occur
together, and suggests that “in order to understand what power relations are about, perhaps we should investigate the forms of resistance and attempts made to dissociate these relations” (1982,
p. 211). In that vein, it seems to me that one undertheorized way to understand the privacy
paradox is as a besieged form of resistance, of an effort, as Foucault puts it, “to refuse what we
are” (Foucault, 1982, p. 216). Consumer outrage at Facebook’s privacy practices, and at its research into its users’ behavior such as the emotional contagion study, is another example; in at least one instance, that outrage was sufficient to get a feature (the reviled Beacon) removed
(boyd, 2008). One should also note that the resistance is not merely verbal, as people install ad
blocking software, delete cookies, and so forth (Hoofnagle & Whittington, 2014). As noted above, teenagers work hard to manage their visibility on Facebook, including deleting and undeleting their account to hide it from view when certain
individuals are likely to be watching, just as college students laboriously untag themselves from
photos after an evening out (Marwick & boyd, 2014). Predictably, corporations actively try to
thwart these efforts at self-help: when consumers delete cookies, for example, advertisers hide
them in Flash cookies. Not only does Facebook demand that users be their offline selves on the site, it makes it notoriously hard to change privacy settings and routinely changes the available settings, a practice that dramatically increases the cost of efforts at privacy self-
management. As noted above, Facebook users work very hard to protect themselves from social
surveillance, but the institutional setting is one that makes it nearly impossible to protect
themselves from surveillance by Facebook. Thus they tend toward social strategies like cryptic
messaging rather than attempting to navigate the site’s privacy settings (Barkhuus, 2012). It is
no surprise that users appear to have given up on the task of protecting themselves from
Facebook, but outrage over Beacon and the emotional contagion study suggests both that users
do in fact care, and that the site does a good job of making such protection nearly impossible to achieve.
None of this resistance, of course, registers in the official narrative: we hear instead that, in fact,
consumers do not value their privacy. Privacy self-management abets such an official narrative
in at least two ways. First, by viewing the loss or preservation of privacy as a commercial
transaction, and then treating the transaction as revealing consumer preference, the privacy self-management model makes both the enthusiastic sharing of information and its grudging release appear as the preference of an
economically rational consumer. That is, treating economic rationality as the truth of
subjectivity makes it possible to propose that no matter what someone does, it can and should be
understood as presenting her revealed preferences. Even more importantly, it assumes that those
preferences have been formed autonomously, outside of the context in which they appear.
Second, the very binarization of privacy self-management, with its endless disjunctive demand – share (with everyone) or not (with anyone) – contributes to the problem.18 In Foucault-inspired
theory, iterability, the fact that norms have to be repeatedly enacted socially, generates an
inevitable resistance. This is most clear in norms having to do with embodiment, as no one
actually inhabits “the body,” and so the simple fact of living in the world generates a tension
between ascribed norms and actual embodied life (Hayles, 1999). Analogously, notice-and-
consent regimes’ erasure of the granularity and social embeddedness of privacy practices works to
foreclose that entire sphere of potential resistance: one either accepts the privacy bargain in its
entirety, or not. It is as if one’s choices for embodiment were between perfectly having the
socially prescribed body, or not being embodied at all. This is part of why it is important to look
at the actual social practices of Facebook users and the ways they push back against the norms of
openness they are asked to iterate. It is also part of why insisting on the evidentiary value of disclosure behavior is so pernicious, even and especially when it is used to explain apparent resistance to further disclosure. To hope
to make any progress at all toward protecting privacy, it is important not just to improve users’
ability to control their information; it is important to dismantle the epistemic and normative
power of the claim that privacy is a matter of individual control of information. As Cohen puts
it, “to grapple with the problem of whether information privacy claims are as deeply irrational as
they can sometimes appear, we must bring the almost-invisible into critical focus” (2012b, p.
107). Our model of privacy, and the way it facilitates privacy’s inevitable erosion, are part of
our problem.
18. Koops suggests that the proposed EU regulations are stuck in a similar binarism: “EU data protection law applies an all-or-nothing approach: data is either personal data (triggering the whole regime), or it is not (triggering nothing), but it cannot be something in between or something else” (2014, p. 257).
References