
GRIFFITH UNIVERSITY

GRIFFITH LAW SCHOOL

Social Science Research Network Legal Scholarship Network

Griffith Law School Research Paper No. 21-10

Elizabeth Englezos

Forget Consent? Answering the Challenges of Digital Space



Forget consent? Answering the challenges of the digital space.
BY ELIZABETH ENGLEZOS1

Our world comprises both material and immaterial parts – the body exists within the material part, while data and ideas exist in the other. Personal data coalesces to form a data
entity in the digital space. Attempts to address the
challenges of data proliferation continue to focus on
consent as a means to protect the material person’s
privacy. Yet, the digital space is vastly different to the
material space. This article builds on the semiotic
theories of Peirce and Saussure to position signification
as the cause for the failure of traditional consent in digital
space. The article concludes that the digital space
requires a specialized approach. It positions signification
as the potential locus operandi for future reforms and in
doing so invites further discussion.

Introduction

Our world comprises both material and immaterial parts – the body exists within the material part, while data and ideas exist in the other. As humans

“being” we exist in a material form within the physical space. Within that

physical space, we have context, nuances and a history based on temporal

events that influence our identity, who we are and who we become. As

material beings, humans cannot occupy the immaterial space in-person and

it is within this immaterial space that the digital space exists. As physical

1
The published version of this article is available in the Journal of Information Ethics (2020), Vol. 29, Iss. 2, pp. 46-69.



beings excluded from the immaterial space, humans cannot exist within the

digital space. Presence within the digital space requires the substitution of

the material person by something immaterial.

Online, individuals are represented according to the contents of their data

entity. A digital substitute is constructed out of this data and it is via this

substitute that the individual interacts online. Essentially, this substitute

becomes the actor, leaving the individual one-step removed from the forum

of interaction. Once the individual gives their initial consent, interactions

between this substitute and other third parties occur and continue without

the further involvement of the individual. The following section begins with a

(re-)introduction to semiotic theory and the process of signification which

enables humankind to bridge the real-world-digital divide. The section, “A

challenge to autonomy” considers the quality of consent as provided by

the material and physical person and its relevance to the digital space. It is followed by “What is consent?”, which considers consent as a concession of power

and examines the challenges created by data proliferation. If consent exists

as a means through which the material person can delimit external access to

their person and personal materials, the power imbalance between material

persons and service providers in the digital space undermines consent so

that material consent becomes irrelevant within the digital space. The

article concludes that digital space requires a specialized legal and ethical

approach. It positions signification as a new locus operandi for legal

reforms and requires a reframing of digital information ethics.



Digital signs for real life.

If we are to get to the root cause of law’s failure to address the issues

presented by digital space, the translation of the material person into a

digital and immaterial sign demands closer attention. The material person

must undergo some form of signification to exist in digital space. According

to Daniel Chandler, a sign is something that can be interpreted as “signifying” something else (Chandler 505). Without delving too deeply into semiotic

theory, one must first note a few things before we proceed.

Linguist Ferdinand de Saussure first used the terms “signified” and

“signifier” to represent a concept and its related sound pattern in his

“Course in General Linguistics” (Saussure, 1916).1 According to Saussure,

the signifier and signified combined or interacted to produce a sign

(Saussure, 2013 [98]). For Saussure then, the sign was the combination of

the idea of an object (rather than the object itself) and the word used to

denote that same object (Chandler 516). As the Saussurean sign is

commonly misapplied in contemporary discourse, it is important to spend

some time clarifying Saussure’s intent before moving forward.

As a linguist, Saussure states that language owes its versatility to the

arbitrariness of signs: “there is no internal connexion … between the idea

“sister” and the French sequence of sounds s-ö-r” (Saussure, 2013 [100]). It

is the arbitrariness of the letters s, ö, and r, as well as their order and

combinations with each other and other letters, that allows combinations of



sounds or letters of script to create all words and sounds that have come

into existence as part of language (Saussure, 2013 [100-101]). Saussure,

therefore, used a dyadic model of signifier and signified to explain how a

sign could be used linguistically to represent the idea of a particular object.

Charles Sanders Peirce, a philosopher and logician, focused on the effect of

signs. Peirce adopted a triadic model for signification (see Fig 1). The

Peircean model of object, representamen (similar to Saussure’s signifier)

and interpretant provided a mechanism to show the impact of interpretation

when considering a sign. Convention and context inform interpretation and

the impression generated by the sign.

[Figure: a triangle linking the Representamen (apex) with the Object and the Interpretant (base).]

Fig 1. Peirce’s Model as a semiotic triangle2

For Peirce, the object referred to the object of signification, or that which is

referred to. The representamen is “something which stands to somebody



for something in some respect or capacity” (Peirce, 1931-58 2.228) and is

the equivalent of Saussure’s “signifier” or word (Chandler 856). The

interpretant is not the party that interprets the sign, but the impression

“create[d] in the mind of that person” (Peirce 1931-58 2.228).

Take, for example, the representamen “ə”. Some will see this symbol and

think of it as a phonetic symbol that assists correct pronunciation. Others

will know it specifically as the “mid central vowel” in the International

Phonetic Alphabet, while others may simply interpret it as an upside down

“e” and consider it to be an error. In each of these instances, a different

interpretant is presented: first a phonetic device, second a sound, and third an error.

This representamen produces different interpretants and is therefore understood as signifying different objects. The object, depending on context, may form part of a dictionary definition, a phonetic device, or be printed on a piece of paper that we are viewing upside-down. Peirce’s

interpretant is shaped by context, community praxis, and convention and

informs our understanding of the object. For Peirce, the meaning of a sign “arises from its interpretation” (Chandler 908). Applying Peirce’s model

to the signification of the physical person in digital space provides us with a

new perspective of the power relations within digital space. This

perspective not only explains why the laws of consent and privacy have

failed to adequately address the challenges of digital space, but also



illustrates the importance of a new specialized information ethics that

recognizes these power relations.

In the digital space, personal data3 coalesces to form a data entity. The data

entity is a formless repository that contains all data related to the material

person. Data is included in this data entity regardless of source or format.

From here, algorithms select and draw from available and readable data in

the repository to create a signifier. Many representamen are created, each dependent on a particular algorithm. Each representamen then substitutes for the material person – not as they exist in the material space but as a product of algorithms applied to a specific task, decision-making process, and so

on. Once substituted, representamen can interact with third parties in place

of the material person. These interactions shape who we are in a material

sense. Representamen also inform how we are perceived by others. The

content and opportunities we receive in the material space are also

influenced by signification. As data proliferates and increasing proportions

of interpersonal dealings occur online, representamen ascend to the position

of actor. Their interactions become more determinant than those of the

material person.

Attempts to address the questions of information ethics and the challenges

of data proliferation continue to focus on consent. “Consent” is a composite

of the Latin con (together) and sentire (feel or think) and indicates an

“accord” or a “feeling together” of the parties (Barnhart 1988). As a means



to protect the material person’s privacy, consent should provide material

persons with control over their data and its use (Englezos 2019). Today,

valid consent is informed by adequate knowledge and notice, and provided by choice. While consent’s Latin origins seem to imply that consent is

bilateral, material persons may withdraw their consent at any time. In this

way, consent is a unilateral act of concession to another, regardless of the

fact that in most instances, consent is exchanged by the parties to an

agreement.

Before progressing, a note on definitions. In this article, “data” refers to

raw data that is decontextualized and uninterpreted. “Personal data” is data

that relates to a particular person. “Personally-Identifying Information” (“PII”) is data that links to an identifiable material person. “Content” is used

broadly to represent any published material – such as social media posts,

government blogs, news sites and so on. “Material consent” refers to the

ideal of consent and is informed by history, popular culture and doctrine.

“Appearance” is used metaphorically to indicate how a representamen or

material person appears to others in the digital space. Appearance should

be distinguished from “representations” material persons make about

themselves. In some instances, appearances and representations may be

similar, however, this will not always be the case. “Outcome” is used to

represent the results and consequences of algorithmic determinations or

automated decisions. “Signification” refers to the process by which a

representamen is created and substituted for the object. “Digital translation”



relates more specifically to the process of translating the physical person, as object, into the digital space as a representamen.

This article builds on the concepts of Peirce’s triadic model to explain the

realities of the contemporary digital space. The diagram below (fig. 2)

builds on figure 1 by showing the division between the digital and physical

space (“digital-physical divide”). However, there are not one but two interpretants in this figure. The presence of two interpretants highlights the

reality of signification within digital space – not only does signification

generate impressions in the digital space, signification may also inform

impressions in the physical space.


[Figure: Peirce’s semiotic triangle split across the digital–physical divide. The representamen and one interpretant sit within the digital space; the object and a second interpretant sit within the physical space.]

Fig 2. Peirce’s semiotic triangle in digital and physical space



Representation in the digital space is further complicated by the process of

signification. The physical person is not represented as a digital icon of its

physical self. Instead, the physical individual is translated into a separate

digital representamen according to algorithmic design. The following

diagram (fig. 3) depicts a simplified version of this process.



[Figure: three panels tracing the path from data entity, through algorithmic modelling and digital translation, to the impression created from that translation. From a single data entity in physical space, algorithm x weights its sources 60%/30%/10%, algorithm y 70%/30%, and algorithm z 25%/50%/25%. Each produces its own representamen (x, y and z) and a first interpretant within the digital space, and each digital translation generates a second interpretant of the object back in physical space.]

Fig. 3 The role of algorithms in signification

Key: Ÿ Personal Data; £ Government Data; r Sensor Data; Û Data Broker
In the digital space, the object is either a source or subject of uploaded data.

Data accumulates to form a data entity from which algorithms select

particular data. Algorithms process selected data to estimate, measure or

predict the qualities of the physical person. Signification, therefore, depends

on data selection and the priorities of the algorithm involved.

In this diagram we see the production of three representamen from one data

entity. Algorithms have produced three different representamen based on

the same object (or, in this case, physical person) according to the

algorithm’s programming. As with algorithms themselves, the diagram

does not identify the data selected from the data entity. The algorithm

producing representamen x prefers passive modes of data collection (such as

those provided by data brokers (Û) and sensor data (r)) while still giving

some weight to government data (£). Representamen y relies almost

exclusively on the data available from data brokers (Û) and a smaller

percentage of sensor data (r), while representamen z gives equal weight to

selected personal data (Ÿ) and sensor data (r) while prioritizing government

data (£) above both of these.
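
To make this weighting concrete, the following minimal sketch (hypothetical data values, with source names and weights mirroring fig. 3, not any actual system) derives three different representamen from a single data entity:

```python
# A minimal sketch of fig. 3: one data entity, three signifying algorithms.
# All data values are invented; the weights follow the percentages in fig. 3.
data_entity = {
    "personal":   {"age": 34, "posts_per_week": 12},
    "government": {"licensed_driver": True, "tax_bracket": 3},
    "sensor":     {"avg_daily_steps": 4200, "home_area": "suburb"},
    "broker":     {"credit_score": 640, "purchase_segment": "budget"},
}

def signify(weights):
    """Build a representamen: a weighted selection from the data entity."""
    return {source: {"weight": w, "data": data_entity[source]}
            for source, w in weights.items() if w > 0}

# Three algorithms, three different representamen of the same object.
representamen_x = signify({"broker": 0.6, "sensor": 0.3, "government": 0.1})
representamen_y = signify({"broker": 0.7, "sensor": 0.3})
representamen_z = signify({"government": 0.5, "personal": 0.25, "sensor": 0.25})
```

The same material person thus “appears” in three different ways, depending entirely on which sources each algorithm reads and how heavily it weights them.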

Each representamen will differ from other representamen as well as the

object and produce different interpretants. Each interpretant creates a

different impression of the physical person through the process of

signification. These impressions of the physical person – their potential,

capacity, capability, trustworthiness and so on – exist as substitutes for the

physical person in digital space. Unrelated physical persons who receive



these representamen in physical space will also form an impression of the

object as they exist in physical space albeit based heavily (if not solely) on a

digital representamen.

At first this analysis may appear an exercise in academic abstraction.

However, each of these interpretants will have some impact on physical and

digital outcomes for the material person.

This article does not claim that all forms of signification are unwanted or

negative, but argues instead for an awareness of the impact of algorithms on

the physical person. Until now, we have relied on consent as a means to

ensure the protection of personal autonomy. However, we have made little

headway despite numerous efforts to address the challenges of the digital

space through consent and privacy reforms. Consent fails because the

physical person gives consent to share their data, yet has no input or

oversight as to its use once signification has taken place.

The digital vs the physical space

The digital space is not easily defined. It seems infinite and ethereal and

consists only of the immaterial. Physical repositories store data in the

physical space – yet data’s value comes from its application outside of that

repository in immaterial form. Physical persons know that digitized data,



content, or information becomes interminably available. Physical persons

are also aware that the replication or repetition of data occurs without loss

of quality. Interminability and high-fidelity reproduction are possible

because the digital space consists only of immaterial things. Immaterial

things can be accessible everywhere and anywhere almost instantaneously.

Most physical persons understand that their associated data or other

intangible property is open to unknown access by third parties. However,

the physical person will not know when such replication or access takes

place.

The concept of an immaterial and digital substitute may be difficult to

grasp. Nonetheless, we know that humans and other material beings cannot

exist in the digital space. Nothing material can. Presence in the digital space

requires translation of the material into a digital code or format in line with

the technology (Koch 2005). As a consequence, interpersonal dealings

increasingly occur in digital space and exclude the human form. In short, interpersonal dealings in the digital space become dealings between digital substitutes. An interpretant created through

algorithmic means lacks the sophistication, nuance and detail of the physical

person, yet we know that important decisions that influence the future of

physical persons are based on algorithmic determinations.



A challenge to autonomy

To translate a physical person into a digital interpretant requires a degree of

dismantling of the physical person that fractures identity into numerous

unknown and incomplete derivatives. Accurate representation becomes impossible, yet these derivatives mediate subsequent data, information and content flows.

Before the first fracture of individual identity, data is stripped of context and

uploaded into digital space. Without context, the data loses nuance and

presents an already incomplete picture (see also Ibrahim 2017). Physical

persons cannot know which data is selected or whether this data (or its

interpretant) is accurate. Physical persons cannot erase data that is misleading, embarrassing or damaging, or remove it from their data entity.4

Data is interminable and irretrievable and presumed to be true. The physical

person and their incorporeal essence are not accurately captured by data

points. As a consequence, potentially inaccurate data remains part of the

data entity and forms the basis of further determinations.

Alain Supiot refers to personality as “the generic concept in which body and soul are held together” (Supiot 2017). However, translation into digital

space requires a separation of the two. Legal personality refers to our right

to develop our personalities and be recognized by others according to that

personality (Supiot 2017), but this is no longer possible. Individuals

construct their identity inwardly, but also outwardly through self-display



(Belk 2013). Identity is not limited to our behavior. Our capacity to “keep a

particular narrative going” about ourselves allows us to define who we are

(Belk 2013). Physical persons are relatively free to represent themselves as

they see fit: as their ideal selves, aspirational selves, or possible selves (Belk

2013). However, the process of translation to digital form reduces the

physical person to representative (but incomplete) composite parts. Once

signified, the material person loses any control of their identity or

appearance.

Our data entity expands and expands as we upload more and more data.

However, the expanse of data is unwieldy. A context-dependent interpretant

has far more utility within the digital space. Algorithms are designed to

select the most relevant characteristics from the data entity according to a

specific context. Our financial signifier will be based on relevant financial data (such as income, debts, assets) while our employability

signifier may depend more heavily on credit rating (Citron and Pasquale

2014), health (Peppet 2014), social media interactions and our address. The

number of available signifiers is limited only by the computations used to create them, while the dataset continues to grow.

At the same time, automated decisions based on algorithms occur without

the physical person’s knowledge. The physicality of the person is redundant.

Once de-linked from the physical person, the interpretant becomes the actor.

Algorithms personalize content, opportunities and outcomes according



to the interpretant. Consequently, this personalization provides the physical

person with a simulated digital version of reality. Physical consent is not

relevant or necessary because the physical person is no longer the subject of

these interactions. Instead, the physical person becomes the object affected

by the actions of the interpretant as the subject. Subjectivity and bias within

algorithms further aggravate the disjuncture between simulated reality and

what is actually real. Some examples illustrate the problem.

Ongoing advances in online data collection (Peacock 2014), the expansion

of sources (from government to commercial) and the “unprecedented

deliberate “sharing” of personal information” ensures a growing repository

for each data entity (Grafanaki 2017, Peacock 2014). While the increased

availability and variety of data sources may improve the potential quality of

the data entity, the accuracy of any interpretant remains dependent on the

algorithms used. The data broker industry generates as much as $200 billion

annually by providing known and unknown third parties with access to

available data (Sadowski n.d.). Physical persons receive a limited and

disproportionately low share of the benefits (Tene and Polonetsky 2013).

This process is commonly referred to as “data mining”. However, Jathan

Sadowski notes that the term “mining” implies the data already exists. In

reality, the data and signifiers produced through data mining are not mined

but “manufactured” (Sadowski n.d.).



Once the physical person becomes a data object, future determinations will inform the way the physical person develops. Data brokers begin with raw data that is processed to

create a representamen. Representamen are evaluated and categorized to

produce an interpretant that is allocated into “relevant” data segments,

aggregates or groups (Grafanaki 2017). The physical person is thus

categorized without input or oversight. The lack of awareness as to when

data is stored, accessed or used makes physical persons easy targets for

“data mining”. The data produced provide additional input into the physical

person’s data entity.

This article does not suggest that there are no benefits to algorithms or

personalization. An estimated 2.5 quintillion bytes of data are created each

day (Marr 2018). This data would be useless without an effective means to

search or sort through content. Algorithms are an essential component of

this process. It is the use of algorithmic processes to substitute and

categorize digital interpretants for physical persons that warrants closer

attention.

Prediction, prescription and polarization

Interpretants provide value by predicting the probable behaviors or

propensities of material persons (Diakopoulos 2015). These predictions use

select data points and rely on algorithms either to classify a representamen based on particular data points or to predict future outputs through regression analysis (Brownlee 2017,



Rendle et al. 2009). Classification categorizes the object into a particular

group according to the data point (or points) used (such as gender, marital

status and so on). Regression analysis estimates particular aspects or

physical characteristics based on the interpretant (such as age, weight,

income and so on). The interpretant's appearance is, therefore, shaped by

the algorithm's designer (Harper 2017). As we saw in fig. 3, the designer

also selects which data the algorithm will target. However, a preference for

a particular data source (such as government, sensor, and so on) merely

means that a higher proportion of the preferred data is considered than data

from other sources. Regardless of preference, this cannot present a

complete picture of the physical person. In addition, designers must

prioritize some characteristics and qualities over others (Harper 2017). We

have already touched on data selection as the first point of potential bias

within algorithmic models. The second point of potential bias occurs with

the use of subjectively determined values to inform calculations of

probability. We must delve a little deeper into the mechanics of probability

theories to properly appreciate why this subjective value has the potential to

cause bias in algorithmic predictions.

Originally described as “the doctrine of chances”, probability theory

provides mathematical formulae to predict the likelihood of certain events

(Bayes 1763). Thomas Bayes made an important contribution to the

evolution of probability theory by showing that one could infer the

probability of a future event by considering the frequency of similar past



events (Debnath and Basu 2015). Bayes’ theorem can also predict the

occurrence of one event in correlation with another (Sarwar et al. n.d.). This

is known as conditional probability and enables us to predict the likelihood

that “event B” will occur if “event A” happens (Shynk n.d.).
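
In standard notation (a reconstruction from the surrounding description, not a formula given in the original article), this conditional probability is expressed by Bayes’ theorem:

```latex
% Probability that event B occurs given that event A has occurred.
% P(B) is the subjectively estimated "prior" discussed below.
P(B \mid A) = \frac{P(A \mid B)\,P(B)}{P(A)}
```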

No doubt the reader has seen conditional probability predictions in action on

platforms such as Netflix (see Gomez-Uribe and Hunt 2015) or

Amazon.com. E-commerce sites often rely on probabilistic determinations

and include similar recommender systems. The k-Nearest Neighbor

(“k-NN”) model also streamlines purchasing processes by identifying and

recommending products the purchaser is more likely to want (Sarwar et al.

n.d.). Most recommendation systems work by finding content similar to past

purchases (“content-based filters”) or by aggregating users with similar

tastes and recommending similar items (“collaborative filtering systems”)

(Condliff et al. 1999). The ability to successfully predict whether readers of

“Book A” will like or buy “Book B” has contributed significantly to the

success of Amazon Books (Shynk n.d.).
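
As a rough illustration of collaborative filtering (toy ratings and function names of my own devising, not the cited systems’ actual code), a user-based filter recommends the unseen item best liked by the most similar user:

```python
# A toy user-based collaborative filter: recommend what a similar user liked.
# Ratings (1-5), user names and the similarity measure are illustrative only.
ratings = {
    "ann":  {"book_a": 5, "book_b": 4, "book_c": 1},
    "ben":  {"book_a": 4, "book_b": 5, "book_d": 4},
    "cara": {"book_c": 5, "book_d": 2},
}

def similarity(u, v):
    """Score two users: closer ratings on shared items give a higher score."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    return sum(4 - abs(ratings[u][i] - ratings[v][i]) for i in shared) / len(shared)

def recommend(user):
    """Suggest the best-rated item, unseen by `user`, from the nearest user."""
    nearest = max((n for n in ratings if n != user),
                  key=lambda n: similarity(user, n))
    unseen = {i: r for i, r in ratings[nearest].items() if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ann"))  # "book_d": readers most like "ann" also liked it
```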

Most of these algorithms are informed by a subjective value. Bayes’

Theorem relies on a “Bayesian Prior” that gives numerical expression to the prior probability of a particular event (Dowe et al. 2013). This prior relies on one party’s

estimate yet is key to the algorithm’s success. As a result, the outcomes of

prediction depend on a subjectively determined “prior” (Dowe et al. 2013).

Any bias on the part of the determining party will be imported into the



algorithm and its results. k-NN models rely on labelled input data to “learn”

how to recognize the appropriate output (Harrison 2018). “Labelling”

requires similarly subjective input. For example, Yi Feng Wen et al. have

produced a database that provides a “normative range of facial

measurements” according to ethnicity (Wen et al. 2015). Nonetheless, the

possibility of “misclassification” (or mislabeling) cannot be ruled out and is

further complicated by mixed ethnicity/race subjects (Wen et al. 2015).

Even the placement of “neighborhood” boundaries – that is, where one

group ends, and another begins – will have a significant impact on

subsequent results (Altman 1992). While this effect can be mitigated,

mitigation will again depend on the algorithm’s design – and therefore the

algorithm designer’s choices (Altman 1992).
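
A minimal k-NN sketch (invented points and labels) shows how the designer’s labelling and the placement of the neighborhood boundary, here via the choice of k, jointly determine the classification:

```python
# A minimal k-NN classifier. The designer's labels and the chosen k
# jointly decide how a borderline point is classified. Data is invented.
from collections import Counter

labeled = [((1, 1), "group_a"), ((2, 1), "group_a"),
           ((3, 3), "group_b"), ((4, 4), "group_b"), ((5, 4), "group_b")]

def knn_classify(point, k):
    """Label a point by majority vote among its k nearest labeled neighbors."""
    nearest = sorted(labeled, key=lambda p: (p[0][0] - point[0]) ** 2
                                          + (p[0][1] - point[1]) ** 2)[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Widening the neighborhood flips the outcome for a borderline point.
print(knn_classify((2, 2), k=1))  # "group_a": the single nearest neighbor
print(knn_classify((2, 2), k=5))  # "group_b": the majority label wins
```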

Subjective values are likely to improve in accuracy with each application.

Nonetheless, we can never have enough data to know everything correctly

and in advance (Polanyi 1886-1964). Algorithms and algorithmic bias have

a key role in determining the “appearance” of the signifiers that substitute

for the material person in digital space. Algorithms rely on the presumption

that what we have done before we will do again. Such presumptions create

important issues in the digital space for two reasons. Most significantly, this

presumption assumes that the data and subsequent interpretant are accurate

yet allows limited (if any) opportunity for review. Unlike real life where we

are free to break with tradition, algorithms also prescribe what we see and

do based on models of what we will probably want. In real-life we can use



public transport for our commute instead of driving, shop in a different

store, or talk to someone with whom we would not normally interact.

However, in the digital space, outlying behaviors, characteristics or choices

are pre-emptively dismissed by algorithmic models as unlikely and,

therefore, rendered invisible to us. Thus, in digital space, we are more likely

to act just as probability predicted. In short, interpretants are born out of

probabilistic determinations and, therefore, subject to bias and inaccuracy,

yet these interpretants inform choices made on the physical person’s behalf

in the digital space. These digital choices have real world outcomes for the

physical person in the physical world.

A new perspective requires a new ethical approach.

Once we recognize how the physical person becomes excluded from

consultation and subsequent threats to personal autonomy, we can demand

new ethics to address this reality. When algorithms turn our search results,

clicks and interactions with technology “into a conversable code,” this code

is stored for later “interrogation and analysis” (Harper 2017). Raw data is

processed by algorithmic methods and converted into indicators of future

behavior or the comparative “worth” of material persons. At the same time,

algorithms lack transparency (Pasquale 2015, Ziewitz 2016), yet may affect

our ability to seek education advantages, work opportunities, receive

welfare, insurance or credit, and so on (Gillespie 2017, Grafanaki 2017).

ZipRecruiter is a straightforward example of an online service that uses

algorithms to streamline otherwise complex processes. ZipRecruiter



employs “matching technology” to request that suitable applicants (registered

with ZipRecruiter and other “job boards”) submit applications for

appropriate job opportunities. Once applications are received, ZipRecruiter

“analyses each one and spotlights the top candidates.”5 Only those

applications deemed suitable are forwarded to the employer. Thus, material

persons can be excluded from contention without knowing they have been

assessed while others are prioritized.
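
ZipRecruiter’s actual matching technology is proprietary and undisclosed; purely to make the screening pattern concrete, a naive score-and-threshold sketch (all criteria, names and numbers invented) might look like this:

```python
# A naive score-and-threshold screen - illustrative only; ZipRecruiter's
# real matching technology is proprietary and certainly more sophisticated.
applicants = [
    {"name": "A", "skills": {"python", "sql"}, "years": 6},
    {"name": "B", "skills": {"java"}, "years": 2},
    {"name": "C", "skills": {"python"}, "years": 4},
]
required = {"skills": {"python", "sql"}, "min_years": 3}

def score(applicant):
    """Score by skill overlap plus years of experience above the minimum."""
    overlap = len(applicant["skills"] & required["skills"])
    return overlap + max(0, applicant["years"] - required["min_years"])

# Only "spotlighted" candidates ever reach the employer; the rest are
# screened out without knowing they were assessed at all.
spotlighted = [a["name"] for a in applicants if score(a) >= 3]
print(spotlighted)  # ['A'] - B and C are silently excluded
```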

To manage vast quantities of data, selection and deselection processes must

short-list the available data into relevant and manageable subsets. Some

data sets are excluded by design, due to computational errors or commercial

necessity, but the criteria are unknowable (Harper 2017). Algorithms consider some data more probative than others and prioritize that data set.

However, prioritization occurs within an already curated subset of data and

the potential for cumulative bias grows with each inclusion or exclusion.

Algorithms do more than include or exclude material persons based on their

interpretants. Behavioral marketing allows advertisers to target campaigns

specifically to certain material persons based on their individual

interpretants (Amer and Noujaim 2019). These campaigns are a lucrative

financial resource for internet-based companies (Grafanaki 2017). The

threat lies not in predictions or advertisements that “miss the mark” but in

those close enough to divert the attention of the physical person elsewhere.

Targeting may reinforce a belief or predisposition of the physical person



without their knowledge or gradually alter their perception. These acts

undermine individual autonomy by encouraging the person down a

particular intellectual or emotional path.

Data without context is also likely to conflate the temporal with the static. A

period of depression might see us looking for support on noticeboards we

would not usually visit, but as this behavior increases, we see more of this

content – even though our feelings may be transitory. The effect of these

clicks becomes more fixed with each click or scroll that affirms the

prediction that this is the content we want. These clicks become a

permanent part of the material person’s data entity, regardless of the limited

length of their relevance. In addition to exclusion, prioritization and

targeting, algorithms may also increase discrimination based on past (and

subsequently irrelevant) behaviors.

In his 2011 work, “The Filter Bubble: What the Internet Is Hiding from You”,

Eli Pariser explores the relationship between the categorization of the

material person and the prescription of digital content (Pariser 2011).

Pariser notes three key aspects of the filter bubble: isolation, invisibility,

and our lack of choice as to whether we become part of that bubble.

Pariser’s hypothesis (Pariser 2011), when combined with the digital

signifier introduced in this article, provides an important illustration of the

operation and effect of the interpretant. “A” (the physical person) is

available for consideration based on their data entity – “data entity A.”



From data entity A, algorithms build “Interpretant A” which is then

categorized and allocated into a group, “Group 1.” Probability theory

predicts the “most relevant” content for Interpretant A and Group 1 to

produce an estimate of the content most desirable, useful or relevant to the

material person. For instance:

1. Interpretant A will probably be interested in content similar to what

they have found interesting before; and,

2. Content sought out by other Group 1 signifiers is additionally likely

to appeal to those in Group 1 (and, by corollary, Interpretant A) in

future.

If we imagine the digital space and its infinite webs of data and information

containing a multitude of theories, opinions and concepts, then we can also

imagine that there is a sphere of information “most commonly known,”

“most commonly sought” or “most often relevant” to the material

person. This sphere of “most commonly known” exists within a larger

encompassing sphere (or concentric spheres) which contains lesser known,

less sought after, or perhaps even highly controversial and prohibited

subject matter. Regardless of the location of content within the inner or outer spheres, each sphere contains topics or beliefs and poles of opinion (pro- or anti-).

A similar hierarchy exists within each topic. The more popular posts, clicks,

or opinions will be positioned in the central ring. Less popular content is



moved to the outer extremes of the group. Let’s assume that Interpretant A

and Group 1 are particularly interested in “Belief X”. Interpretant A (within

Group 1) will exist somewhere along the continuum between pro- and anti-

Belief X. If Interpretant A and Group 1 are in favor of Belief X, they will

receive an increasing proportion of pro Belief X (“Pro-X”) subject matter.

Pariser also notes that human beings tend to choose the “least

objectionable” option (Pariser 2010). If saturated with disagreeable content,

we engage with the least objectionable version. The quantity of clicks might

be the same as that of someone who genuinely agrees with the content.

However, probability theory cannot distinguish between high- and low-

quality engagement. The quantity of clicks will not represent the true nature

of the material person. Probability will infer that the group prefers Pro-X

content. Consequently, Interpretant A and Group 1 will move further along

the continuum towards Pro-X and see less and less anti Belief X (“Anti-

X”). A culture of likes, upvotes, shares and so on will create a positive

feedback loop that rewards behaviors that conform to Group 1’s norms. As a corollary, the group’s shared view is further entrenched and its members are exposed to decreasing proportions of Anti-X content.
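
A toy simulation (all numbers invented) illustrates the drift: if the user simply engages with whatever mix is served, a recommender that amplifies engaged-with content steadily crowds out the opposing pole:

```python
# Toy feedback loop: the recommender serves more of whatever was engaged
# with, so the Pro-X share of served content drifts toward saturation
# even though the user's clicking behavior never changes. Numbers invented.
pro_x_share = 0.5  # initial proportion of Pro-X content served

for step in range(10):
    engagement = pro_x_share            # clicks mirror the served mix
    pro_x_share = min(1.0, pro_x_share + 0.1 * engagement)
    print(f"step {step}: {pro_x_share:.2f} Pro-X")

# Within ten iterations virtually everything served is Pro-X, and Anti-X
# content has effectively disappeared from view.
```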

Our tendency to engage with the least objectionable can unknowingly trap

us in a pattern we would otherwise seek to break, or which slowly diverts us

to content we are not actively seeking (Sulkowski and Picciolini 2018). Our

exposure to digital content influences the appearance of interpretants and



the attitudes of material persons in physical space. This appearance also

affects the way material persons are seen by other material persons within

that same space.

Algorithms will look for parallels between our interpretants and link parties

that might not meet otherwise (Mantelero 2016). By grouping like with like,

the process closes individuals off to opposing viewpoints. Popular opinions

become easily entrenched while minority views go unheard (Nikolov et al.

2019, Harper 2017). These groups or “aggregates” lack the “social and

cultural identities of real communities” (Alaimo and Kallinikos 2017).

Outliers and outlying behaviors are commonly ignored by algorithmic

models (Tene and Polonetsky 2013). Minority opinions are less visible

because algorithms assume that group behaviors are static and unlikely to

change (Tene and Polonetsky 2013). The material person’s world view is,

therefore, shaped by the application of an algorithm to a data set that may or

may not accurately represent them. The result is not only an increase in

external influence, but in external control (Benkler 2001). The material

person becomes an object of personalized information rather than an

autonomous subject who selects the content they receive (Ibrahim 2017).

Outcomes, opportunities and content (such as news, social media posts,

viewing recommendations) are dictated to the material person through the

ease and convenience of algorithmic models and signification. Physical

persons become part of a self-referential loop, prone to polarization and

deprived of conflicting opinions. The interpretant holds substantial



influence over the constitution of a physical person’s identity. It is precisely

this aspect which a new data and information ethics must address.

Figure 3 shows the link between signification and its outcomes in both digital

and physical space. These outcomes influence the evolution and

development of individual identity. Signification influences who we

become and our perception by others. Laws built on the legal person as

something singular and material cannot address this translation. Law that

focusses on consent as a means through which a physical person asserts

their autonomy ignores signification and the challenges of digital space. In

the next section, this article explains why consent is irrelevant to the

actions of an interpretant within and beyond the digital space.

What is consent?

The previous section showed that data about the object, algorithms, the

representamen and the interpretant all have significant influence over

outcomes in the physical space. The examples showed that these outcomes

have concrete consequences that may not always benefit the material

person. Next we must consider whether this occurs with the consent of the

material person.

Consent is difficult to define. We know what is not consent. Consent plays

an important role in individual autonomy. Consent mediates obligations



between parties. Consent validates and justifies acts that fall under scrutiny or substantiates claims for enforcement against others. Consent creates

relationships between material persons or extinguishes them, creates

binding obligations to others, and forces the performance of obligations

between parties. An absence of consent can ground legal actions under tort

or criminal law (Brosnan and Flynn 2017). Essentially, consent allows the

physical person to delimit what others can do to, for and against them.

An examination of consent, the interpretant and the digital space

Consent is an ideal that developed from its early origins around the 12th

century (Barnhart and Steinmetz 1988). A material person’s right to consent

or withhold consent is universally recognized.6 Legal precedent requires that

valid consent is given freely7 and occurs once the material person has

adequate notice, knowledge and choice as to how (and whether) they will be

affected. However, only the signifier has the power to act within the digital

space (Andrus 2017). Material consent is meaningless. For valid material

consent each signifier would need adequate notice, knowledge and choice.

Even if this were possible, material persons cannot consent to signification

and its unforeseen outcomes. Let’s return to our earlier example of

Interpretant A but add more physical world background to create a concrete

example of how consent fails within the digital space.

If we are talking about Interpretant A, then we can assume there is an

original Object A or Physical Person A (“PP-A”) according to whom



Interpretant A is constructed. For argument’s sake, we will say that PP-A

comes from a long line of supporters of political party X. They are Pro-X in

their belief, but following a news report on radio, they are interested to

know more about some of the claims made by their opposition

(“Opposition”). PP-A returns home and googles some Opposition articles

to better understand their arguments. PP-A is not convinced and remains a

staunch Pro-X voter but begins to receive more posts on this particular

Opposition issue through social media, news media and so on.

PP-A is in a vulnerable position. Let’s assume in this situation that there is

no middle ground and that PP-A can either be Pro-X or Anti-X. This

example could then proceed in two different ways:

Firstly, as a member of Group 1, which is also considered Pro-X, PP-A must

actively seek information on this particular issue to fully resolve their

inquiry. Each search along these lines pushes them further into the outer

margins of Group 1 and further along the Belief X continuum from Pro-X

towards the Anti-X end within that group.

PP-A’s posts within that group become less visible to other Pro-X signifiers,

but increasingly visible to Anti-X. PP-A then receives more and more Anti-

X material, occasionally engaging with the more moderate and least

objectionable form of Anti-X content.



Eventually, PP-A will exist in an overlap between Anti-X and Pro-X, but

within those groups will receive less acknowledgement from Pro-X group

members but increasing support from Anti-X members. The example, while

simplistic, is familiar to any of us who have googled an opposing viewpoint

and gradually found ourselves more amenable to it.

Alternatively, or secondly, PP-A may be distracted by other concerns or

abandon their inquiry. As a member of Group 1 they continue to receive

more Pro-X content. Occasionally, those with more extreme Pro-X beliefs

than those of PP-A will share content that engages PP-A. As with our

previous example, this engagement – this time with Pro-X sentiment – will

push PP-A along the belief X continuum further towards the Pro-X end. As

this shift occurs, the Opposition view that initially drew PP-A’s attention recedes further from PP-A’s purview.

At its most basic level, the interpretant has become the actor on which

digital decisions are made. The interpretant does not review the Terms and

Conditions (“T&Cs”), nor does it click “I agree”. The physical person gives

consent in the physical space. Consent presumes that the physical person’s

consent is relevant to subsequent acts within the digital space. However, the

physical person’s consent remains irrelevant unless the interpretant has

delegated agency from the material person. Alternatively, if the physical

person’s consent were to be so widely construed as to include the delegation

of agency to their interpretants, one cannot reasonably claim that this



consent is valid on the grounds that the physical person cannot be expected

to understand or anticipate the variety of outcomes and influences such

agency would produce.

Consent as a concession of power

The question then becomes whether the physical person has adequate

notice, knowledge or choice to authorize the signifier to act as an agent on

their behalf or whether the signifier is capable of giving material consent.

In either of these instances, it seems unlikely the physical person intended

for consent to extend so far as to have them completely replaced by an

unknown entity. To imply that signification is justified under consent would

be to concede all of the physical person’s power and authority to an

unknown actor for any and all circumstances.

Whether it is between the physical person and the interpretant, or between

physical persons interacting through digital space, consent remains a

concession of power to another party. For example, consent allows

consumer exchanges of property. Under ideal circumstances a seller accepts

legal entry into a relationship that allows the buyer to take their property,

and in exchange, the buyer accepts entry into a legal relationship that allows

the seller to keep money, data or property that otherwise belongs to them.

Both parties concede their right to claim back their property without legal



ramification. Each party is free to deal with the exchanged property as they

see fit – or as agreed in the T&Cs.

A crucial part of valid material consent relies on the parties’ opportunities to

review and understand the T&Cs. Given that the signifier has no

opportunity to review the T&Cs, we shall proceed on the basis that consent

presumes that physical persons have consented to the use of their data in

various ways regardless of signification. Nonetheless, the stated terms

should be transparent and readily understood.8 In addition, standard form

contracts (or “adhesion contracts”) do not allow negotiation between the

parties (Leff 1970, Hoffman 2018). In adhesion contracts, T&Cs

exist on a “take-it-or-leave-it” basis (Obar and Oeldorf-Hirsch 2018). The use of

adhesion contracts by monopoly providers (such as Google and Facebook)

reduces the physical person’s options to compliance or missing out (Obar

and Oeldorf-Hirsch 2018). In many instances the physical person consents to the access of personal data and subsequent data use in exchange for a free service or discount.

Free social media platforms and digital services are not “free” (Hoofnagle

and Whittington 2014; Hull 2015). Physical persons pay through non-

pecuniary performance at a data and privacy cost. The initial draft of the

Proposal for a Directive of the European Parliament and of the Council on

certain aspects concerning contracts for the supply of digital content (“EU

DCD proposal”) attempted to address the exchange of free goods and



services for data. The proposal recognized that the distinction between free

and paid services discriminated between traditional and new business

models: money exchange versus data exchange. The disparity effectively

incentivized the exchange of digital content and services for data (EU DCD

Proposal, Rec (13)).

The Directive as adopted earlier this year includes contracts where the

“trader supplies or undertakes to supply” digital content or services

provided the consumer “provides or undertakes to provide personal data to

the trader” (Directive on contracts of supply of digital content and digital

services, Art 3, 1). Unfortunately, these exclusions are broadly defined.

Exchange contracts which process data for the exclusive purposes of

“improving security, compatibility or interoperability” are not subject to the

Directive (Directive on contracts of supply of digital content and digital

services, Art. 3, 5(f)). Perhaps the reader recognizes these terms from

privacy notices they have previously skimmed? They are not uncommon

terms. Nonetheless, the move towards regulating free services to the

consumer is encouraging even if the outcomes are uncertain. It is clear,

however, that material persons are unaware of the true cost of these

transactions (Grafanaki 2017). This article has already discussed the

negative impact that signification has on self-determination. The physical

person also suffers a privacy loss.



Concession and privacy are inextricably linked (Ortiz 1989). For Ortiz,

privacy creates a boundary between the values of the physical person and

their community. Ideally, the physical person only agrees to share their data

if willing to concede to a loss in privacy over that data. The result is a kind

of “privacy calculus” where material persons weigh losses in privacy

against the benefits of opting-in (Gómez-Barroso et al. 2018).
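
Schematically (my notation, not Gómez-Barroso et al.’s), the calculus reduces to opting in only when the perceived benefit outweighs the perceived loss of privacy:

```latex
% Opt in if and only if perceived benefit exceeds perceived privacy loss.
\text{opt in} \iff B_{\text{benefit}} > L_{\text{privacy}}
```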

Privacy is also relative in that it exists only in reference to the person from

whom we wish to hide (Hand 2018). The constant connectivity of the digital

space provides a feeling of connection that is also isolating (Debord 1931-

1970). Isolation gives a false impression of privacy. While physical persons

interact via a network of screens they exist as part of a circuit. They are

limited by and to that network by convenience. The feeling of personal privacy gained through our concessions of digital privacy occurs without adequate knowledge, notice and choice.

For many, one of the most appealing aspects of digital self-expression is the

feeling of anonymity. Selective expression in digital space enables the

physical person to conceal some aspects of themselves from some within

their social network. However, once these expressions have entered digital

space, the physical person cannot prevent their access by other unforeseen

parties. In order to give valid consent, the physical person must understand the

concessions they make. Thus, disclosure remains crucial to consent and is a

substantial focus for legislation: If knowledge is power, disclosure offers the



physical person the knowledge to properly consent. The question remains

whether legal disclosures are effective in the digital realm.

The quality of these digital disclosures directly affects the quality of notice

the individual receives. However, the digital space poses a two-fold threat to

individual notice. First, studies estimate that less than 10% of users who

access End User License Agreements (“EULAs”) spend more than two

minutes reviewing their contents (Bakos et al. 2014). Given the length of

these disclosures, it seems unlikely that a 2-minute perusal has provided

sufficient notice for the physical person to acquire the requisite knowledge

for valid consent. In fact, studies suggest that physical persons are not fully

informed, do not want to be informed, and are incapable of understanding

and processing the details of these agreements (Ben-Shahar and Schneider

2011, Wilkinson-Ryan 2014).

Secondly, the ever-increasing requirements for transparency and clarity

produce longer disclosure documents filled with more and more detail (Ben-

Shahar and Schneider 2011). The result is “disclosure overload” where

standard form disclosures have become “so long and elaborate that

disclosers have problems … assembling the information … [and

Individuals] … cannot understand, assimilate [nor] analyze the avalanche

of information” (Ben-Shahar and Schneider 2011). In addition, en masse

disclosures create an “accumulation problem” where individuals are

overwhelmed with disclosure documents and those who wish to read the



documents cannot do so proficiently (Ben-Shahar and Schneider 2011). We

cannot continue to rely on notice, when notice has become so impractical as

to be virtually impossible.

Our third requirement for valid consent is choice. This is also problematic in

the digital space given the consenting party may be unaware that consent

has taken place (Obar and Oeldorf-Hirsch 2018). The option to click “I

agree” channels individual attention elsewhere to keep the individual “in a

buying mood” and carries little gravity (Obar and Oeldorf-Hirsch 2018).

The ease of “I agree” also discourages and possibly thwarts attempts at

critical inquiry (Obar and Oeldorf-Hirsch 2018). In addition, clickwraps and

browse wraps such as “I agree” also exploit our bias towards agreement by

triggering a system of heuristics that bypass deeper reflection in favor of

impulsive and rapid decision-making (Leonhard 2017). Free choice is also

undermined by the significant market power held by companies such as

Google and Facebook. In its final report, published June 2019, the

Australian Competition and Consumer Commission notes the “considerable

imbalance in bargaining power between digital platforms and individuals”

(Australian Competition and Consumer Commission 2019). For many, the

cost of “opting-out” of digital consent will be too high. Regardless,

disclosure and transparency cannot and do not solve the problems of

consent in the digital space.



The European Union’s General Data Protection Regulation (“GDPR”) is

one of the most significant and recent developments in the law of consent as

it applies to data (Englezos 2019). The GDPR crosses international borders

and, therefore, provides a new de facto global standard for data regulation

(Englezos 2019). However, the GDPR demonstrates the same continued

reliance on consent and disclosure as its predecessors and contemporaries.

As we have seen, the digital space and physical space could not be more

different. The former is immaterial but interminable, while the latter is

material and ephemeral. The new inclusions and new rights contained

within the GDPR cannot successfully address the unique problems of

consent in and beyond the digital space because its focus remains on making

the consent of the physical world fit the digital space. As outlined above, no

matter what our approach – be it through the consent of the physical person

or the interpretant – neither have adequate notice, knowledge or choice to

give valid consent.

Conclusion

This article provides a new lens through which we can understand the

digital space and seeks to add to academic discussion by offering a new

model of digital actors. In digital space we have interpretants instead of

physical persons. Acts based on interpretants influence the development of

the physical person just as the actions of the physical person continue to

shape the interpretant in return. As academics continue to “bang their



heads” against the proverbial brick wall of consent and privacy law in their

attempts to address the ethical challenges of digital space, commercial

interests will continue to drive the production of digital content. Social

media platforms and other digital content providers will continue to shape

and inform our individual realities. In reality, we have limited control over

our data (if any) and limited control over who has access.

In person, we have input into and oversight of our appearance and some control

over how we appear to others. We can choose to look composed and

professional in an interview, relaxed and affable with friends, and be

childish with our children. In the digital space, we have no such control

over how we appear, and it is much more difficult to segregate these

different personas (Belk 2013). The physical space and digital space have

become so intertwined that there is no clear distinction between the two.

Consequently, occurrences in digital space shape the way material persons

perceive each other in the physical world as well as in digital space. Within

a space filled with collective memories, the digital interpretant is

continuously informed and reformed according to occurrences both in- and

outside of the digital space. The object, the representamen and the

interpretant become increasingly difficult to separate while none of them

have adequate knowledge, notice or choice to give valid consent.

For now, the first step is to recognize that signification renders consent unfit for the digital space. In future, legal scholars may need to consider signification

as the locus operandi for the law of consent or other laws that govern digital interaction, while ethicists consider the much larger threat to individual autonomy. The most successful approach may be one that recognizes the novelty and challenges of the digital space and focuses on signification. Precisely what form that approach should take remains a fascinating subject for further consideration.

Oversight of the physical person’s signification would provide the physical person with an opportunity to review the accuracy of their substitute. This approach is a significant departure from the “right to know” when automated decisions are made about you (GDPR Art 22): oversight of signification implements a stage of review before the decision is made. It can only work, however, if there are limits on the creation of unique signifiers – otherwise the process of oversight would be as burdensome as current models of material-world disclosure. Further research in this area is needed.

At the very least, it is time to recognize the failure of material consent as a

solution to the problems of the digital space.

Declaration

The author has no conflicts of interest to declare.

1. Signified and signifier are the commonly used English translations of signifié and signifiant from the first published edition of Cours de linguistique générale.
2. Adapted from Chandler, 836.
3. I have specifically adopted the term “personal data” in this article in keeping with the European Union’s General Data Protection Regulation, Art 4 (“GDPR”).
4. Although GDPR OJ L 119/1, Art 17 and other similar legislation allow the natural person to request the erasure of their data, the data can never be entirely erased from the digital space.

5. See, for example, the ZipRecruiter video How ZipRecruiter works: The smartest way to hire, for any size company, in every industry, which states, in relation to employers who post job advertisements on Ziprecruiter.com: “Once you post, our matching technology actively scans millions of resumes and profiles on ZipRecruiter and 100+ job boards and invites qualified applicants to apply. As applicants come into your dashboard, we analyse each one and spotlight the top candidates to make sure you never miss a great match.” (from 0:21).
6. Whether this right is respected and observed is a matter for separate discussion.

7. See, for example, Commercial Bank of Australia v Amadio (1983) 151 CLR 447, Louth v Diprose (1992) 175 CLR 621 and Blomley v Ryan (1956) 99 CLR 362 in contract law, and Re T (adult: refusal of medical treatment) [1992] 4 All ER 649 in medical consent.


8. See, for example, Australian legislation such as the Competition and Consumer Act 2010 (Cth), Sch 2 (Australian Consumer Law), s 24(2).

Reference List

Alaimo, C., & Kallinikos, J. (2017). Computing the everyday: Social media

as data platforms. Information Society, 33(4), 175–191.

Re T (adult: refusal of medical treatment) [1992] 4 All ER 649.

Altman, N. S. (1992). An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3), 175–185.

Amer, K., & Noujaim, J. (2019). The Great Hack. United States of

America: Netflix.

Andrus, M. T. (2017). Not without my consent: Preserving individual liberty in light of the comprehensive collection and consolidation of personally identifiable information. Journal of Internet Law, 20(9), 1.

Australian Competition and Consumer Commission. (2019). Digital Platforms Inquiry: Final Report (June 2019).

Bakos, Y., Marotta-Wurgler, F., & Trossen, D. R. (2014). Does Anyone

Read the Fine Print? Consumer Attention to Standard-Form Contracts.

The Journal of Legal Studies, 43(1), 1–35.

https://doi.org/10.1086/674424

Barnhart, R. K., & Steinmetz, S. (1988). The Barnhart dictionary of

etymology. The Bronx, New York: H. W. Wilson.

Barron, L. (2011). Living with the Virtual: Baudrillard, Integral Reality, and

Second Life. Cultural Politics, 7(3), 391–408.

Baudrillard, J. (2010). Carnival and cannibal: Ventriloquous evil (C. Turner, Trans.). Seagull Books.

Baudrillard, J. (1998). The consumer society: Myths and structures. London; Thousand Oaks, Calif.: Sage Publications.

Baudrillard, J. (2002). The Anti-Aesthetic: essays. In H. Foster (Ed.), The

Anti-Aesthetic: Essays on post-modern culture (p. 145). The New

Press.

Baudrillard, J., & Benedict, J. (1993). The transparency of evil: Essays on extreme phenomena. London; New York: Verso.

Baudrillard, J., & Glaser, S. F. (2017). Simulacra and simulation. Ann Arbor: University of Michigan Press.

Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, F. R. S. Communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S. Philosophical Transactions (1683–1775), 53, 370–418. The Royal Society.

Belk, R. W. (2013). Extended Self in a Digital World. Journal of Consumer

Research, 40(3), 477–500.

Ben-Shahar, O., & Schneider, C. E. (2011). The Failure of Mandated

Disclosure. University of Pennsylvania Law Review, 159(3), 647–749.

Benkler, Y. (2001). Siren songs and Amish children: Autonomy,

information, and law. New York University Law Review, 76(1), 23–

113.

Blomley v Ryan (1956) 99 CLR 362.

Brosnan, L., & Flynn, E. (2017). Freedom to negotiate: A proposal

extricating “capacity” from “consent.” International Journal of Law in

Context, 13(1), 58–76.

Brownlee, J. (2017). Difference Between Classification and Regression in

Machine Learning. Machine Learning Mastery.com.

https://machinelearningmastery.com/classification-versus-regression-

in-machine-learning/.

Chandler, D. (2017). Semiotics: The basics. New York: Routledge.

Clarke, D. B. (2010). Dreams Rise in Darkness: The White Magic of

Cinema. Film-Philosophy, 14(2), 21–40.

Commercial Bank of Australia v Amadio (1983) 151 CLR 447.

Condliff, M. K., Madigan, D., Lewis, D. D., & Posse, C. (1999). Bayesian

Mixed-Effects Models for Recommender Systems. In ACM SIGIR ’99

Workshop on Recommender Systems: Algorithms and Evaluation.

Debnath, L., & Basu, K. (2015). A short history of probability theory and its

applications. International Journal of Mathematical Education in

Science and Technology, 46.

Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic

investigation of computational power structures. Digital Journalism,

3(3), 398–415.

Dowe, D. L. (Ed.). (2013). Algorithmic probability and friends. Bayesian prediction and artificial intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 – December 2, 2011 (Vol. 7070). Berlin, Heidelberg: Springer.

Englezos, E. (2019). A new world standard?: Why Australian businesses

should be ensuring their compliance with the EU “general data

protection regulation.” Intellectual Property Forum: journal of the

Intellectual and Industrial Property Society of Australia and New

Zealand.

European Council and Parliament, Proposal for a Directive of the European Parliament and of the Council on certain aspects concerning contracts for the supply of digital content, 9.12.2015, COM(2015) 634 final, 2015/0287 (COD).

European Parliament and Council, Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) OJ L 119/1.

Gillespie, T. (2017). Algorithmically recognizable: Santorum’s Google

problem, and Google’s Santorum problem. Information,

Communication & Society, 20(1), 63–80.

Gómez-Barroso, J.-L., Feijóo, C., & Martínez-Martínez, I. J. (2018).

Privacy calculus: Factors that influence the perception of benefit.

Profesional de la Informacion, 27(2), 341–348.

https://doi.org/10.3145/epi.2018.mar.12

Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems, 6(4), Article 13.

Grafanaki, S. (2017). Autonomy challenges in the age of Big Data.

Fordham Intellectual Property, Media & Entertainment Law Journal,

27(4), 803.

Guest, T. (2008). Second Lives (1st ed.). Random House UK.

Hand, D. J. (2018). Aspects of Data Ethics in a Changing World: Where

Are We Now? Big Data, 6(3), 176–190.

Harper, T. (2017). The big data public and its problems: Big data and the

structural transformation of the public sphere. New Media and Society,

19(9), 1424–1439.

Harrison, O. (2018). Machine Learning Basics with the K-Nearest

Neighbors Algorithm. https://towardsdatascience.com/machine-

learning-basics-with-the-k-nearest-neighbors-algorithm-6a6e71d01761

Hoffman, D. A. (2018). Relational contracts of adhesion. The University of Chicago Law Review, 85(6), 1395–1462.

Hoofnagle, C. J., & Whittington, J. (2014). Free: Accounting for the costs of

the internet’s most popular price. UCLA Law Review, 61(3), 606–670.

Hull, G. (2015). Successful failure: what Foucault can teach us about

privacy self-management in a world of Facebook and big data. Ethics

and Information Technology, 17(2), 89–101.

Ibrahim, Y. (2017). Coalescing the mirror and the screen: Consuming the “self” online. Continuum: Journal of Media & Cultural Studies, 31(1), 104–113.

Koch, A. (2005). Cyber Citizen or Cyborg Citizen: Baudrillard, Political

Agency, and the Commons in Virtual Politics. Journal of Mass Media

Ethics, 20(2&3), 159.

Leff, A. A. (1970). Contract as Thing. American University Law Review,

19(2), 131.

Leonhard, C. (2017). Dangerous or benign legal fictions, cognitive biases, and consent in contract law. St. John’s Law Review, 91(2), 385–426.

Louth v Diprose (1992) 175 CLR 621

Lucy, N., Hartley, J., Thwaites, T., Colebrook, C., Tofts, D., Briggs, R., et al. (2016). A dictionary of postmodernism (1st ed.). Chichester, West Sussex, UK; Malden, MA: Wiley-Blackwell.

Mantelero, A. (2016). Personal data for decisional purposes in the age of

analytics: From an individual to a collective dimension of data

protection. Computer Law & Security Review, 32(2), 238–255.

Marr, B. (2018). Here’s Why Data Is Not the New Oil. Forbes (online).

https://www.forbes.com/sites/bernardmarr/2018/03/05/heres-why-data-

is-not-the-new-oil/#55aab76a3aa9. Accessed 18 April 2019

Nikolov, D., Lalmas, M., Flammini, A., & Menczer, F. (2019). Quantifying

Biases in Online Information Exposure. Journal of the Association for

Information Science and Technology, 70(3), 218–229.

Nunes, M. (1995). Jean Baudrillard in cyberspace: Internet, virtuality, and postmodernity. Style, 29(2), 314–327.

Obar, J. A., & Oeldorf-Hirsch, A. (2018). The Clickwrap: A Political

Economic Mechanism for Manufacturing Consent on Social Media.

Social Media + Society, 4(3), 1.

Ortiz, D. R. (1989). Privacy, Autonomy, and Consent. Harvard Journal of

Law and Public Policy, 12(1), 91–97.

Pariser, E. (2011). The filter bubble: what the Internet is hiding from you.

New York: Penguin Press.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that

Control Money and Information (1st ed.). Harvard University Press.

Pazzani, M. J., & Billsus, D. (2007). Content-based recommendation systems. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The adaptive web: Methods and strategies of web personalization (pp. 325–341). Berlin, Heidelberg: Springer Berlin Heidelberg.

Peacock, S. E. (2014). How web tracking changes user agency in the age of

Big Data: The used user. Big Data & Society, 1(2).

Peppet, S. R. (2014). Regulating the internet of things: First steps toward managing discrimination, privacy, security, and consent. Texas Law Review, 93(1), 85–179.

Polanyi, K. (1957). The great transformation. Boston: Beacon Press.

Rendle, S., Freudenthaler, C., Gantner, Z., & Schmidt-Thieme, L. (2009).

BPR: Bayesian Personalized Ranking from Implicit Feedback. In

Proceedings of the Twenty-Fifth Conference on Uncertainty in

Artificial Intelligence (pp. 452–461). Arlington, Virginia, United

States: AUAI Press.

Sadowski, J. (2019). When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), 205395171882054.

Sarwar, B. M., Karypis, G., Konstan, J. A., & Riedl, J. (2001). Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web (pp. 285–295). ACM.

Saussure, F. de. (1916/1967). Cours de linguistique générale (1st ed.). Paris: Payot.
Saussure, F. de. (2013). Course in general linguistics (R. Harris, Trans.) (New ed.). London: Bloomsbury Academic.

Shynk, J. J. (n.d.). Probability, random variables, and random processes: Theory and signal processing applications (1st ed.). Hoboken, NJ: Wiley.

Sulkowski, M. L., & Picciolini, C. (2018). The path into and out of violent extremism – Part 1: How youth become radicalized into violent extremism. Communique. Bethesda: National Association of School Psychologists.

Supiot, A. (2017). Homo Juridicus: On the Anthropological Function of the

Law (2nd ed.). Verso.

Tene, O., & Polonetsky, J. (2013). Big data for all: privacy and user control

in the age of analytics. Northwestern Journal of Technology and

Intellectual Property, 11(5), 239.

Wen, Y. F., Wong, H. M., Lin, R., Yin, G., & McGrath, C. (2015). Inter-ethnic/racial facial variations: A systematic review and Bayesian meta-analysis of photogrammetric studies. PLoS ONE, 10(8).

Wilkinson-Ryan, T. (2014). A psychological account of consent to fine

print. Iowa Law Review, 99(4), 1745–1784.

Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods.

Science Technology and Human Values, 41(1), 3–16.
